I0211 12:56:18.920868 9 e2e.go:243] Starting e2e run "1722da6f-c945-4f1f-94c0-e7d38bbc7010" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581425777 - Will randomize all specs
Will run 215 of 4412 specs

Feb 11 12:56:19.374: INFO: >>> kubeConfig: /root/.kube/config
Feb 11 12:56:19.378: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 11 12:56:19.401: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 11 12:56:19.437: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 11 12:56:19.437: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 11 12:56:19.437: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 11 12:56:19.446: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 11 12:56:19.446: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 11 12:56:19.446: INFO: e2e test version: v1.15.7
Feb 11 12:56:19.447: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 12:56:19.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Feb 11 12:56:19.574: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-623eea00-1a9b-467a-b504-782b1c991f35
STEP: Creating secret with name s-test-opt-upd-55fd2f02-8cc6-4e27-9df0-bf6967c0276c
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-623eea00-1a9b-467a-b504-782b1c991f35
STEP: Updating secret s-test-opt-upd-55fd2f02-8cc6-4e27-9df0-bf6967c0276c
STEP: Creating secret with name s-test-opt-create-08a46f15-04ed-48f0-a511-f6376b83d5ab
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 12:56:40.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8374" for this suite.
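
To reproduce the scenario above by hand: the test mounts secrets into a projected volume marked optional, so the pod keeps running while secrets are deleted, updated, and created underneath it, and the kubelet folds each change into the mounted files on its periodic sync rather than instantly. A minimal sketch (the secret name, pod name, and image are illustrative, not the test's own):

    kubectl create secret generic s-test-opt --from-literal=data-1=value-1
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
        volumeMounts:
        - name: secrets
          mountPath: /etc/projected
      volumes:
      - name: secrets
        projected:
          sources:
          - secret:
              name: s-test-opt
              optional: true   # absent or later-deleted secrets don't block the pod
    EOF
    kubectl delete secret s-test-opt   # the mounted file disappears after the next kubelet sync
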
Feb 11 12:57:02.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 12:57:02.457: INFO: namespace projected-8374 deletion completed in 22.28327328s
• [SLOW TEST:43.009 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 12:57:02.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb 11 12:57:02.643: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix116837528/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 12:57:02.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3849" for this suite.
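
The flag under test is easy to exercise manually: kubectl proxy can listen on a Unix domain socket instead of a TCP port, and any HTTP client that speaks Unix sockets can reach the API through it. A rough equivalent of what the test does (socket path is arbitrary; the hostname in the URL is ignored when a socket is used):

    kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
    curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
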
Feb 11 12:57:08.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 12:57:08.967: INFO: namespace kubectl-3849 deletion completed in 6.158824056s • [SLOW TEST:6.509 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 12:57:08.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4612 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Feb 11 12:57:09.112: INFO: Found 0 stateful pods, waiting for 3 Feb 11 12:57:19.163: INFO: Found 1 stateful pods, waiting for 3 Feb 11 12:57:29.120: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 11 12:57:29.120: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 11 12:57:29.120: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 11 12:57:39.131: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 11 12:57:39.131: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 11 12:57:39.131: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 11 12:57:49.152: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 11 12:57:49.152: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 11 12:57:49.152: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Feb 11 12:57:49.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4612 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 11 12:57:53.250: INFO: stderr: "I0211 12:57:53.026859 49 log.go:172] (0xc000116e70) (0xc00058aa00) Create stream\nI0211 12:57:53.026970 49 log.go:172] (0xc000116e70) (0xc00058aa00) Stream added, broadcasting: 1\nI0211 12:57:53.030709 49 log.go:172] (0xc000116e70) Reply frame received for 1\nI0211 12:57:53.030773 49 log.go:172] (0xc000116e70) (0xc00072c000) 
Create stream\nI0211 12:57:53.030817 49 log.go:172] (0xc000116e70) (0xc00072c000) Stream added, broadcasting: 3\nI0211 12:57:53.031957 49 log.go:172] (0xc000116e70) Reply frame received for 3\nI0211 12:57:53.031999 49 log.go:172] (0xc000116e70) (0xc0009c00a0) Create stream\nI0211 12:57:53.032015 49 log.go:172] (0xc000116e70) (0xc0009c00a0) Stream added, broadcasting: 5\nI0211 12:57:53.032975 49 log.go:172] (0xc000116e70) Reply frame received for 5\nI0211 12:57:53.122514 49 log.go:172] (0xc000116e70) Data frame received for 5\nI0211 12:57:53.122637 49 log.go:172] (0xc0009c00a0) (5) Data frame handling\nI0211 12:57:53.122670 49 log.go:172] (0xc0009c00a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0211 12:57:53.150888 49 log.go:172] (0xc000116e70) Data frame received for 3\nI0211 12:57:53.150912 49 log.go:172] (0xc00072c000) (3) Data frame handling\nI0211 12:57:53.150924 49 log.go:172] (0xc00072c000) (3) Data frame sent\nI0211 12:57:53.233782 49 log.go:172] (0xc000116e70) (0xc00072c000) Stream removed, broadcasting: 3\nI0211 12:57:53.233960 49 log.go:172] (0xc000116e70) Data frame received for 1\nI0211 12:57:53.233993 49 log.go:172] (0xc00058aa00) (1) Data frame handling\nI0211 12:57:53.234026 49 log.go:172] (0xc00058aa00) (1) Data frame sent\nI0211 12:57:53.234047 49 log.go:172] (0xc000116e70) (0xc00058aa00) Stream removed, broadcasting: 1\nI0211 12:57:53.234084 49 log.go:172] (0xc000116e70) (0xc0009c00a0) Stream removed, broadcasting: 5\nI0211 12:57:53.234209 49 log.go:172] (0xc000116e70) Go away received\nI0211 12:57:53.235617 49 log.go:172] (0xc000116e70) (0xc00058aa00) Stream removed, broadcasting: 1\nI0211 12:57:53.235693 49 log.go:172] (0xc000116e70) (0xc00072c000) Stream removed, broadcasting: 3\nI0211 12:57:53.235721 49 log.go:172] (0xc000116e70) (0xc0009c00a0) Stream removed, broadcasting: 5\n" Feb 11 12:57:53.250: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 11 12:57:53.250: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 11 12:58:03.296: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Feb 11 12:58:13.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4612 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 12:58:13.985: INFO: stderr: "I0211 12:58:13.712717 71 log.go:172] (0xc0007ea370) (0xc0007c2780) Create stream\nI0211 12:58:13.713144 71 log.go:172] (0xc0007ea370) (0xc0007c2780) Stream added, broadcasting: 1\nI0211 12:58:13.716623 71 log.go:172] (0xc0007ea370) Reply frame received for 1\nI0211 12:58:13.716705 71 log.go:172] (0xc0007ea370) (0xc0006ac320) Create stream\nI0211 12:58:13.716714 71 log.go:172] (0xc0007ea370) (0xc0006ac320) Stream added, broadcasting: 3\nI0211 12:58:13.718267 71 log.go:172] (0xc0007ea370) Reply frame received for 3\nI0211 12:58:13.718425 71 log.go:172] (0xc0007ea370) (0xc00077a0a0) Create stream\nI0211 12:58:13.718452 71 log.go:172] (0xc0007ea370) (0xc00077a0a0) Stream added, broadcasting: 5\nI0211 12:58:13.720003 71 log.go:172] (0xc0007ea370) Reply frame received for 5\nI0211 12:58:13.837972 71 log.go:172] (0xc0007ea370) Data frame received for 5\nI0211 12:58:13.838144 71 log.go:172] (0xc00077a0a0) (5) Data frame handling\nI0211 
12:58:13.838169 71 log.go:172] (0xc00077a0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0211 12:58:13.838214 71 log.go:172] (0xc0007ea370) Data frame received for 3\nI0211 12:58:13.838231 71 log.go:172] (0xc0006ac320) (3) Data frame handling\nI0211 12:58:13.838255 71 log.go:172] (0xc0006ac320) (3) Data frame sent\nI0211 12:58:13.968375 71 log.go:172] (0xc0007ea370) Data frame received for 1\nI0211 12:58:13.968592 71 log.go:172] (0xc0007ea370) (0xc00077a0a0) Stream removed, broadcasting: 5\nI0211 12:58:13.968698 71 log.go:172] (0xc0007c2780) (1) Data frame handling\nI0211 12:58:13.968728 71 log.go:172] (0xc0007c2780) (1) Data frame sent\nI0211 12:58:13.969051 71 log.go:172] (0xc0007ea370) (0xc0006ac320) Stream removed, broadcasting: 3\nI0211 12:58:13.969227 71 log.go:172] (0xc0007ea370) (0xc0007c2780) Stream removed, broadcasting: 1\nI0211 12:58:13.969549 71 log.go:172] (0xc0007ea370) Go away received\nI0211 12:58:13.971346 71 log.go:172] (0xc0007ea370) (0xc0007c2780) Stream removed, broadcasting: 1\nI0211 12:58:13.971413 71 log.go:172] (0xc0007ea370) (0xc0006ac320) Stream removed, broadcasting: 3\nI0211 12:58:13.971422 71 log.go:172] (0xc0007ea370) (0xc00077a0a0) Stream removed, broadcasting: 5\n" Feb 11 12:58:13.990: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 11 12:58:13.991: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 11 12:58:24.083: INFO: Waiting for StatefulSet statefulset-4612/ss2 to complete update Feb 11 12:58:24.084: INFO: Waiting for Pod statefulset-4612/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 11 12:58:24.084: INFO: Waiting for Pod statefulset-4612/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 11 12:58:34.779: INFO: Waiting for StatefulSet statefulset-4612/ss2 to complete update Feb 11 12:58:34.779: INFO: Waiting for Pod statefulset-4612/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 11 12:58:34.779: INFO: Waiting for Pod statefulset-4612/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 11 12:58:44.100: INFO: Waiting for StatefulSet statefulset-4612/ss2 to complete update Feb 11 12:58:44.100: INFO: Waiting for Pod statefulset-4612/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 11 12:58:44.100: INFO: Waiting for Pod statefulset-4612/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 11 12:58:54.102: INFO: Waiting for StatefulSet statefulset-4612/ss2 to complete update Feb 11 12:58:54.102: INFO: Waiting for Pod statefulset-4612/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 11 12:59:04.100: INFO: Waiting for StatefulSet statefulset-4612/ss2 to complete update Feb 11 12:59:04.100: INFO: Waiting for Pod statefulset-4612/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 11 12:59:14.095: INFO: Waiting for StatefulSet statefulset-4612/ss2 to complete update STEP: Rolling back to a previous revision Feb 11 12:59:24.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4612 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 11 12:59:25.572: INFO: stderr: "I0211 12:59:24.365527 90 log.go:172] (0xc0008682c0) (0xc00092e640) Create stream\nI0211 12:59:24.365839 90 log.go:172] (0xc0008682c0) (0xc00092e640) Stream added, broadcasting: 1\nI0211 
12:59:24.369158 90 log.go:172] (0xc0008682c0) Reply frame received for 1\nI0211 12:59:24.369206 90 log.go:172] (0xc0008682c0) (0xc00092e6e0) Create stream\nI0211 12:59:24.369216 90 log.go:172] (0xc0008682c0) (0xc00092e6e0) Stream added, broadcasting: 3\nI0211 12:59:24.370165 90 log.go:172] (0xc0008682c0) Reply frame received for 3\nI0211 12:59:24.370183 90 log.go:172] (0xc0008682c0) (0xc00092e780) Create stream\nI0211 12:59:24.370190 90 log.go:172] (0xc0008682c0) (0xc00092e780) Stream added, broadcasting: 5\nI0211 12:59:24.371347 90 log.go:172] (0xc0008682c0) Reply frame received for 5\nI0211 12:59:25.256407 90 log.go:172] (0xc0008682c0) Data frame received for 5\nI0211 12:59:25.256507 90 log.go:172] (0xc00092e780) (5) Data frame handling\nI0211 12:59:25.256542 90 log.go:172] (0xc00092e780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0211 12:59:25.474055 90 log.go:172] (0xc0008682c0) Data frame received for 3\nI0211 12:59:25.474108 90 log.go:172] (0xc00092e6e0) (3) Data frame handling\nI0211 12:59:25.474127 90 log.go:172] (0xc00092e6e0) (3) Data frame sent\nI0211 12:59:25.559351 90 log.go:172] (0xc0008682c0) Data frame received for 1\nI0211 12:59:25.559396 90 log.go:172] (0xc00092e640) (1) Data frame handling\nI0211 12:59:25.559438 90 log.go:172] (0xc00092e640) (1) Data frame sent\nI0211 12:59:25.559461 90 log.go:172] (0xc0008682c0) (0xc00092e640) Stream removed, broadcasting: 1\nI0211 12:59:25.560169 90 log.go:172] (0xc0008682c0) (0xc00092e6e0) Stream removed, broadcasting: 3\nI0211 12:59:25.561276 90 log.go:172] (0xc0008682c0) (0xc00092e780) Stream removed, broadcasting: 5\nI0211 12:59:25.561325 90 log.go:172] (0xc0008682c0) (0xc00092e640) Stream removed, broadcasting: 1\nI0211 12:59:25.561343 90 log.go:172] (0xc0008682c0) (0xc00092e6e0) Stream removed, broadcasting: 3\nI0211 12:59:25.561357 90 log.go:172] (0xc0008682c0) (0xc00092e780) Stream removed, broadcasting: 5\n" Feb 11 12:59:25.572: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 11 12:59:25.572: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 11 12:59:35.744: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Feb 11 12:59:45.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4612 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 11 12:59:46.339: INFO: stderr: "I0211 12:59:46.092678 110 log.go:172] (0xc00013adc0) (0xc00024c6e0) Create stream\nI0211 12:59:46.093036 110 log.go:172] (0xc00013adc0) (0xc00024c6e0) Stream added, broadcasting: 1\nI0211 12:59:46.097141 110 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0211 12:59:46.097231 110 log.go:172] (0xc00013adc0) (0xc00024c780) Create stream\nI0211 12:59:46.097269 110 log.go:172] (0xc00013adc0) (0xc00024c780) Stream added, broadcasting: 3\nI0211 12:59:46.097964 110 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0211 12:59:46.097993 110 log.go:172] (0xc00013adc0) (0xc00024c820) Create stream\nI0211 12:59:46.098000 110 log.go:172] (0xc00013adc0) (0xc00024c820) Stream added, broadcasting: 5\nI0211 12:59:46.098970 110 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0211 12:59:46.202080 110 log.go:172] (0xc00013adc0) Data frame received for 5\nI0211 12:59:46.202403 110 log.go:172] (0xc00024c820) (5) Data frame handling\nI0211 12:59:46.202467 110 log.go:172] (0xc00024c820) (5) Data frame sent\n+ mv -v 
/tmp/index.html /usr/share/nginx/html/\nI0211 12:59:46.202613 110 log.go:172] (0xc00013adc0) Data frame received for 3\nI0211 12:59:46.202653 110 log.go:172] (0xc00024c780) (3) Data frame handling\nI0211 12:59:46.202676 110 log.go:172] (0xc00024c780) (3) Data frame sent\nI0211 12:59:46.329634 110 log.go:172] (0xc00013adc0) Data frame received for 1\nI0211 12:59:46.330075 110 log.go:172] (0xc00013adc0) (0xc00024c780) Stream removed, broadcasting: 3\nI0211 12:59:46.330177 110 log.go:172] (0xc00024c6e0) (1) Data frame handling\nI0211 12:59:46.330215 110 log.go:172] (0xc00024c6e0) (1) Data frame sent\nI0211 12:59:46.330263 110 log.go:172] (0xc00013adc0) (0xc00024c820) Stream removed, broadcasting: 5\nI0211 12:59:46.330293 110 log.go:172] (0xc00013adc0) (0xc00024c6e0) Stream removed, broadcasting: 1\nI0211 12:59:46.330319 110 log.go:172] (0xc00013adc0) Go away received\nI0211 12:59:46.331543 110 log.go:172] (0xc00013adc0) (0xc00024c6e0) Stream removed, broadcasting: 1\nI0211 12:59:46.331554 110 log.go:172] (0xc00013adc0) (0xc00024c780) Stream removed, broadcasting: 3\nI0211 12:59:46.331561 110 log.go:172] (0xc00013adc0) (0xc00024c820) Stream removed, broadcasting: 5\n" Feb 11 12:59:46.340: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 11 12:59:46.340: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 11 12:59:56.383: INFO: Waiting for StatefulSet statefulset-4612/ss2 to complete update Feb 11 12:59:56.384: INFO: Waiting for Pod statefulset-4612/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 11 12:59:56.384: INFO: Waiting for Pod statefulset-4612/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 11 12:59:56.384: INFO: Waiting for Pod statefulset-4612/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 11 13:00:07.783: INFO: Waiting for StatefulSet statefulset-4612/ss2 to complete update Feb 11 13:00:07.783: INFO: Waiting for Pod statefulset-4612/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 11 13:00:07.783: INFO: Waiting for Pod statefulset-4612/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 11 13:00:16.439: INFO: Waiting for StatefulSet statefulset-4612/ss2 to complete update Feb 11 13:00:16.439: INFO: Waiting for Pod statefulset-4612/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 11 13:00:26.401: INFO: Waiting for StatefulSet statefulset-4612/ss2 to complete update Feb 11 13:00:26.401: INFO: Waiting for Pod statefulset-4612/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 11 13:00:36.399: INFO: Waiting for StatefulSet statefulset-4612/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 11 13:00:46.404: INFO: Deleting all statefulset in ns statefulset-4612 Feb 11 13:00:46.408: INFO: Scaling statefulset ss2 to 0 Feb 11 13:01:06.442: INFO: Waiting for statefulset status.replicas updated to 0 Feb 11 13:01:06.445: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:01:06.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4612" for this suite. 
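
Everything this test drives through the API can also be run with plain kubectl. Assuming the container in the ss2 pod template is named nginx (the log only shows the images, so the name is an assumption), a hand-run equivalent of the update-then-rollback cycle is:

    kubectl -n statefulset-4612 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
    kubectl -n statefulset-4612 rollout status statefulset/ss2   # RollingUpdate replaces pods one at a time, highest ordinal first
    kubectl -n statefulset-4612 rollout undo statefulset/ss2     # back to the previous controller revision
    kubectl -n statefulset-4612 get controllerrevisions          # revision names like ss2-6c5cd755cd / ss2-7c9b54fd4c above
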
Feb 11 13:01:14.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:01:14.657: INFO: namespace statefulset-4612 deletion completed in 8.153258313s
• [SLOW TEST:245.690 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:01:14.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 11 13:01:14.739: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 11 13:01:14.754: INFO: Waiting for terminating namespaces to be deleted...
Feb 11 13:01:14.756: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Feb 11 13:01:14.774: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 11 13:01:14.774: INFO: Container kube-proxy ready: true, restart count 0
Feb 11 13:01:14.774: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 11 13:01:14.774: INFO: Container weave ready: true, restart count 0
Feb 11 13:01:14.774: INFO: Container weave-npc ready: true, restart count 0
Feb 11 13:01:14.774: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 11 13:01:14.774: INFO: Container kube-bench ready: false, restart count 0
Feb 11 13:01:14.774: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 11 13:01:14.785: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 11 13:01:14.785: INFO: Container kube-controller-manager ready: true, restart count 21
Feb 11 13:01:14.785: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 11 13:01:14.785: INFO: Container kube-proxy ready: true, restart count 0
Feb 11 13:01:14.785: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 11 13:01:14.785: INFO: Container kube-apiserver ready: true, restart count 0
Feb 11 13:01:14.785: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 11 13:01:14.785: INFO: Container kube-scheduler ready: true, restart count 13
Feb 11 13:01:14.785: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 11 13:01:14.785: INFO: Container coredns ready: true, restart count 0
Feb 11 13:01:14.785: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 11 13:01:14.785: INFO: Container etcd ready: true, restart count 0
Feb 11 13:01:14.785: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 11 13:01:14.785: INFO: Container weave ready: true, restart count 0
Feb 11 13:01:14.785: INFO: Container weave-npc ready: true, restart count 0
Feb 11 13:01:14.785: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 11 13:01:14.785: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-58ffe0c4-c3a6-4b9b-8c39-f2b7f6d45028 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-58ffe0c4-c3a6-4b9b-8c39-f2b7f6d45028 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-58ffe0c4-c3a6-4b9b-8c39-f2b7f6d45028
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:01:37.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3158" for this suite.
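
The same scheduling check can be reproduced manually: label a node, point a pod's nodeSelector at that label, and the scheduler will only bind the pod to the matching node. A sketch with an illustrative label key and value (the test generates a random key):

    kubectl label node iruya-node example.com/e2e-demo=42
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: nodeselector-demo
    spec:
      nodeSelector:
        example.com/e2e-demo: "42"
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
    EOF
    kubectl label node iruya-node example.com/e2e-demo-   # trailing '-' removes the label again
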
Feb 11 13:02:07.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:02:07.264: INFO: namespace sched-pred-3158 deletion completed in 30.214867888s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:52.607 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:02:07.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-9424b845-1707-4c78-b2a6-99741bd762c1
STEP: Creating configMap with name cm-test-opt-upd-b3405289-c223-4a6a-97e7-e7dbaa2b6449
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9424b845-1707-4c78-b2a6-99741bd762c1
STEP: Updating configmap cm-test-opt-upd-b3405289-c223-4a6a-97e7-e7dbaa2b6449
STEP: Creating configMap with name cm-test-opt-create-7ff79b03-13ea-40b0-8a2c-bad770839d5f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:03:29.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2253" for this suite.
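
This is the ConfigMap twin of the projected-secret test earlier in the run: the volume is marked optional, so deleting, updating, and creating ConfigMaps must all show up in the mounted files without restarting the pod. A minimal sketch (names and image illustrative):

    kubectl create configmap cm-test-opt --from-literal=data-1=value-1
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-volume-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
        volumeMounts:
        - name: cm
          mountPath: /etc/config
      volumes:
      - name: cm
        configMap:
          name: cm-test-opt
          optional: true
    EOF
    # update in place; the new value propagates into /etc/config on the next kubelet sync
    kubectl create configmap cm-test-opt --from-literal=data-1=value-2 --dry-run -o yaml | kubectl apply -f -
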
Feb 11 13:03:51.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:03:51.576: INFO: namespace configmap-2253 deletion completed in 22.181456089s
• [SLOW TEST:104.311 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:03:51.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:03:51.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-15" for this suite.
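
The QOS class being verified here is derived from the pod's resource requests and limits: no requests or limits on any container yields BestEffort, requests below limits yields Burstable, and requests equal to limits on every container yields Guaranteed. A quick way to see it (illustrative pod):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # no requests/limits anywhere
    EOF
    kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints BestEffort
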
Feb 11 13:04:13.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:04:14.185: INFO: namespace pods-15 deletion completed in 22.379389975s • [SLOW TEST:22.609 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:04:14.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 11 13:04:14.376: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 11 13:04:19.467: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 11 13:04:23.533: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 11 13:04:25.544: INFO: Creating deployment "test-rollover-deployment" Feb 11 13:04:25.563: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 11 13:04:27.585: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 11 13:04:27.597: INFO: Ensure that both replica sets have 1 created replica Feb 11 13:04:27.607: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 11 13:04:27.619: INFO: Updating deployment test-rollover-deployment Feb 11 13:04:27.619: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 11 13:04:29.735: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 11 13:04:29.748: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 11 13:04:29.756: INFO: all replica sets need to contain the pod-template-hash label Feb 11 13:04:29.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023067, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, 
loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 13:04:31.818: INFO: all replica sets need to contain the pod-template-hash label Feb 11 13:04:31.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023067, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 13:04:33.801: INFO: all replica sets need to contain the pod-template-hash label Feb 11 13:04:33.802: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023067, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 13:04:35.782: INFO: all replica sets need to contain the pod-template-hash label Feb 11 13:04:35.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023067, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 13:04:37.786: INFO: all replica sets need to contain the pod-template-hash label Feb 11 13:04:37.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023067, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 13:04:39.784: INFO: all replica sets need to contain the pod-template-hash label Feb 11 13:04:39.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023079, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 13:04:41.784: INFO: all replica sets need to contain the pod-template-hash label Feb 11 13:04:41.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023079, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 13:04:43.792: INFO: all replica sets need to contain the pod-template-hash label Feb 11 13:04:43.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023079, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 13:04:45.786: INFO: all replica sets need to contain the pod-template-hash label Feb 11 13:04:45.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023079, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 13:04:47.769: INFO: all replica sets need to contain the pod-template-hash label Feb 11 13:04:47.769: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023079, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023065, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 13:04:49.786: INFO: Feb 11 13:04:49.786: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 11 13:04:49.807: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-1110,SelfLink:/apis/apps/v1/namespaces/deployment-1110/deployments/test-rollover-deployment,UID:bbd9ece2-c174-42fc-b1de-4ca54e78f02f,ResourceVersion:23943292,Generation:2,CreationTimestamp:2020-02-11 13:04:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-11 13:04:25 +0000 UTC 2020-02-11 13:04:25 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-11 13:04:49 +0000 UTC 2020-02-11 13:04:25 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 11 13:04:49.817: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-1110,SelfLink:/apis/apps/v1/namespaces/deployment-1110/replicasets/test-rollover-deployment-854595fc44,UID:7866696a-cb89-473e-a038-fe3b9cb88d11,ResourceVersion:23943282,Generation:2,CreationTimestamp:2020-02-11 13:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment bbd9ece2-c174-42fc-b1de-4ca54e78f02f 0xc0021fbb07 0xc0021fbb08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 11 13:04:49.817: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 11 13:04:49.818: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-1110,SelfLink:/apis/apps/v1/namespaces/deployment-1110/replicasets/test-rollover-controller,UID:904a5100-a36e-4879-a48a-560da093bbcf,ResourceVersion:23943291,Generation:2,CreationTimestamp:2020-02-11 13:04:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment bbd9ece2-c174-42fc-b1de-4ca54e78f02f 0xc0021fb87f 0xc0021fb8a0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 11 13:04:49.818: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-1110,SelfLink:/apis/apps/v1/namespaces/deployment-1110/replicasets/test-rollover-deployment-9b8b997cf,UID:356910e7-7ee1-4f17-90af-8c2d42ef50c6,ResourceVersion:23943244,Generation:2,CreationTimestamp:2020-02-11 13:04:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment bbd9ece2-c174-42fc-b1de-4ca54e78f02f 0xc0021fbc50 0xc0021fbc51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 11 13:04:49.826: INFO: Pod "test-rollover-deployment-854595fc44-frmt2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-frmt2,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-1110,SelfLink:/api/v1/namespaces/deployment-1110/pods/test-rollover-deployment-854595fc44-frmt2,UID:13ad9416-3fb1-4c41-a3af-b4dae9cd9b36,ResourceVersion:23943265,Generation:0,CreationTimestamp:2020-02-11 13:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 7866696a-cb89-473e-a038-fe3b9cb88d11 0xc000552927 0xc000552928}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q5mc4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q5mc4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-q5mc4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000552aa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000552ac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:04:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:04:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 
+0000 UTC 2020-02-11 13:04:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:04:27 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-11 13:04:28 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-11 13:04:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://f476f559c1830d878ad3a5a743fb9b7961ceb27bfafaefef8ce5d65af50266a3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:04:49.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1110" for this suite. Feb 11 13:04:56.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:04:56.700: INFO: namespace deployment-1110 deletion completed in 6.868662557s • [SLOW TEST:42.514 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:04:56.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Feb 11 13:04:56.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3666' Feb 11 13:04:57.503: INFO: stderr: "" Feb 11 13:04:57.503: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 11 13:04:57.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3666' Feb 11 13:04:57.893: INFO: stderr: "" Feb 11 13:04:57.893: INFO: stdout: "update-demo-nautilus-srr78 update-demo-nautilus-t7qt6 " Feb 11 13:04:57.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-srr78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3666' Feb 11 13:04:58.098: INFO: stderr: "" Feb 11 13:04:58.098: INFO: stdout: "" Feb 11 13:04:58.099: INFO: update-demo-nautilus-srr78 is created but not running Feb 11 13:05:03.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3666' Feb 11 13:05:04.180: INFO: stderr: "" Feb 11 13:05:04.180: INFO: stdout: "update-demo-nautilus-srr78 update-demo-nautilus-t7qt6 " Feb 11 13:05:04.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-srr78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3666' Feb 11 13:05:04.594: INFO: stderr: "" Feb 11 13:05:04.595: INFO: stdout: "" Feb 11 13:05:04.595: INFO: update-demo-nautilus-srr78 is created but not running Feb 11 13:05:09.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3666' Feb 11 13:05:09.808: INFO: stderr: "" Feb 11 13:05:09.809: INFO: stdout: "update-demo-nautilus-srr78 update-demo-nautilus-t7qt6 " Feb 11 13:05:09.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-srr78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3666' Feb 11 13:05:10.001: INFO: stderr: "" Feb 11 13:05:10.002: INFO: stdout: "" Feb 11 13:05:10.002: INFO: update-demo-nautilus-srr78 is created but not running Feb 11 13:05:15.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3666' Feb 11 13:05:15.145: INFO: stderr: "" Feb 11 13:05:15.145: INFO: stdout: "update-demo-nautilus-srr78 update-demo-nautilus-t7qt6 " Feb 11 13:05:15.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-srr78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3666' Feb 11 13:05:15.243: INFO: stderr: "" Feb 11 13:05:15.243: INFO: stdout: "true" Feb 11 13:05:15.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-srr78 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3666' Feb 11 13:05:15.355: INFO: stderr: "" Feb 11 13:05:15.355: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 11 13:05:15.355: INFO: validating pod update-demo-nautilus-srr78 Feb 11 13:05:15.388: INFO: got data: { "image": "nautilus.jpg" } Feb 11 13:05:15.388: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 11 13:05:15.388: INFO: update-demo-nautilus-srr78 is verified up and running Feb 11 13:05:15.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t7qt6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3666' Feb 11 13:05:15.515: INFO: stderr: "" Feb 11 13:05:15.515: INFO: stdout: "true" Feb 11 13:05:15.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t7qt6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3666' Feb 11 13:05:15.655: INFO: stderr: "" Feb 11 13:05:15.655: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 11 13:05:15.655: INFO: validating pod update-demo-nautilus-t7qt6 Feb 11 13:05:15.661: INFO: got data: { "image": "nautilus.jpg" } Feb 11 13:05:15.661: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 11 13:05:15.661: INFO: update-demo-nautilus-t7qt6 is verified up and running STEP: using delete to clean up resources Feb 11 13:05:15.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3666' Feb 11 13:05:15.848: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 11 13:05:15.848: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 11 13:05:15.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3666' Feb 11 13:05:16.029: INFO: stderr: "No resources found.\n" Feb 11 13:05:16.029: INFO: stdout: "" Feb 11 13:05:16.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3666 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 11 13:05:16.242: INFO: stderr: "" Feb 11 13:05:16.242: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:05:16.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3666" for this suite. 
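The loop above repeatedly shells out to kubectl with a go-template to decide whether each update-demo container is running. The same check can be made directly against the API; below is a minimal sketch assuming a recent client-go (the v1.15-era client this run used takes no context argument), with the namespace and label taken from the log and error handling trimmed:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// containerRunning reports whether the pod has a running container with the
// given name, mirroring the go-template check in the log.
func containerRunning(p corev1.Pod, name string) bool {
	for _, st := range p.Status.ContainerStatuses {
		if st.Name == name && st.State.Running != nil {
			return true
		}
	}
	return false
}

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)
	for {
		pods, err := cs.CoreV1().Pods("kubectl-3666").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "name=update-demo"})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if containerRunning(p, "update-demo") {
				running++
			}
		}
		if running > 0 && running == len(pods.Items) {
			fmt.Println("all update-demo containers are running")
			return
		}
		time.Sleep(5 * time.Second) // same 5s cadence as the test's retry loop
	}
}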
Feb 11 13:05:40.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:05:40.347: INFO: namespace kubectl-3666 deletion completed in 24.092003702s • [SLOW TEST:43.646 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:05:40.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 11 13:05:40.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-8441' Feb 11 13:05:40.734: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 11 13:05:40.734: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Feb 11 13:05:42.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8441' Feb 11 13:05:43.163: INFO: stderr: "" Feb 11 13:05:43.163: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:05:43.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8441" for this suite. 
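The stderr line above shows that kubectl run with --generator=deployment/apps.v1 was already deprecated in this release. The equivalent API call creates the Deployment directly; a minimal sketch assuming a recent client-go, with the image and namespace taken from the log (the container name "nginx" is an assumption, since the generated pod spec is not shown):

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)
	labels := map[string]string{"run": "e2e-test-nginx-deployment"}
	one := int32(1)
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &one,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "nginx",
					Image: "docker.io/library/nginx:1.14-alpine",
				}}},
			},
		},
	}
	if _, err := cs.AppsV1().Deployments("kubectl-8441").Create(
		context.TODO(), d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}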
Feb 11 13:05:49.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:05:49.374: INFO: namespace kubectl-8441 deletion completed in 6.105713354s • [SLOW TEST:9.027 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:05:49.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:06:49.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8814" for this suite. 
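The probe spec that starts above asserts the key readiness-probe semantics: a container whose readiness probe always fails never becomes Ready, but it is never restarted either (only liveness probes trigger restarts). The test's pod is not printed in the log, so the sketch below is illustrative, with an assumed busybox image; note the probe field is named ProbeHandler in current APIs and was plain Handler in the v1.15 tree this run used:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "probe-never-ready"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:    "busybox",
			Image:   "busybox",
			Command: []string{"sleep", "3600"},
			ReadinessProbe: &corev1.Probe{
				// /bin/false always exits 1: the container stays unready,
				// but a failing readiness probe never restarts it.
				ProbeHandler: corev1.ProbeHandler{
					Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
				},
				PeriodSeconds: 5,
			},
		}}},
	}
	if _, err := cs.CoreV1().Pods("container-probe-8814").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}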
Feb 11 13:07:11.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:07:11.905: INFO: namespace container-probe-8814 deletion completed in 22.449734279s • [SLOW TEST:82.529 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:07:11.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Feb 11 13:07:12.078: INFO: Waiting up to 5m0s for pod "var-expansion-24c81152-d8e0-45ee-b12d-17db10f9c4ca" in namespace "var-expansion-272" to be "success or failure" Feb 11 13:07:12.124: INFO: Pod "var-expansion-24c81152-d8e0-45ee-b12d-17db10f9c4ca": Phase="Pending", Reason="", readiness=false. Elapsed: 45.810823ms Feb 11 13:07:14.397: INFO: Pod "var-expansion-24c81152-d8e0-45ee-b12d-17db10f9c4ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318465072s Feb 11 13:07:16.408: INFO: Pod "var-expansion-24c81152-d8e0-45ee-b12d-17db10f9c4ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329837245s Feb 11 13:07:18.419: INFO: Pod "var-expansion-24c81152-d8e0-45ee-b12d-17db10f9c4ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.340505542s Feb 11 13:07:20.426: INFO: Pod "var-expansion-24c81152-d8e0-45ee-b12d-17db10f9c4ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.347514556s Feb 11 13:07:22.486: INFO: Pod "var-expansion-24c81152-d8e0-45ee-b12d-17db10f9c4ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.407837353s STEP: Saw pod success Feb 11 13:07:22.486: INFO: Pod "var-expansion-24c81152-d8e0-45ee-b12d-17db10f9c4ca" satisfied condition "success or failure" Feb 11 13:07:22.493: INFO: Trying to get logs from node iruya-node pod var-expansion-24c81152-d8e0-45ee-b12d-17db10f9c4ca container dapi-container: STEP: delete the pod Feb 11 13:07:22.546: INFO: Waiting for pod var-expansion-24c81152-d8e0-45ee-b12d-17db10f9c4ca to disappear Feb 11 13:07:22.555: INFO: Pod var-expansion-24c81152-d8e0-45ee-b12d-17db10f9c4ca no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:07:22.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-272" for this suite. 
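The variable-expansion spec above relies on Kubernetes' own $(VAR) substitution: the kubelet expands $(NAME) references in command and args from the container's env before the process starts, with no shell involved. A minimal sketch assuming a recent client-go; the container name dapi-container comes from the log, while the image and env values are placeholders:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c"},
				// $(GREETING) is expanded by Kubernetes from the container's
				// env before the command runs, independent of any shell.
				Args: []string{"echo $(GREETING)"},
				Env:  []corev1.EnvVar{{Name: "GREETING", Value: "test-value"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("var-expansion-272").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}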
Feb 11 13:07:28.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:07:28.751: INFO: namespace var-expansion-272 deletion completed in 6.186765222s • [SLOW TEST:16.846 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:07:28.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-ebf4dbc0-15e6-478b-bfd7-54a264c6dc76 STEP: Creating a pod to test consume secrets Feb 11 13:07:28.853: INFO: Waiting up to 5m0s for pod "pod-secrets-99cd1fa7-42f1-4eaa-aa83-da68ca180826" in namespace "secrets-8780" to be "success or failure" Feb 11 13:07:28.891: INFO: Pod "pod-secrets-99cd1fa7-42f1-4eaa-aa83-da68ca180826": Phase="Pending", Reason="", readiness=false. Elapsed: 37.886791ms Feb 11 13:07:30.903: INFO: Pod "pod-secrets-99cd1fa7-42f1-4eaa-aa83-da68ca180826": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049945601s Feb 11 13:07:32.912: INFO: Pod "pod-secrets-99cd1fa7-42f1-4eaa-aa83-da68ca180826": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058942004s Feb 11 13:07:34.956: INFO: Pod "pod-secrets-99cd1fa7-42f1-4eaa-aa83-da68ca180826": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102187273s Feb 11 13:07:37.714: INFO: Pod "pod-secrets-99cd1fa7-42f1-4eaa-aa83-da68ca180826": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.860859849s STEP: Saw pod success Feb 11 13:07:37.714: INFO: Pod "pod-secrets-99cd1fa7-42f1-4eaa-aa83-da68ca180826" satisfied condition "success or failure" Feb 11 13:07:37.720: INFO: Trying to get logs from node iruya-node pod pod-secrets-99cd1fa7-42f1-4eaa-aa83-da68ca180826 container secret-volume-test: STEP: delete the pod Feb 11 13:07:37.803: INFO: Waiting for pod pod-secrets-99cd1fa7-42f1-4eaa-aa83-da68ca180826 to disappear Feb 11 13:07:37.858: INFO: Pod pod-secrets-99cd1fa7-42f1-4eaa-aa83-da68ca180826 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:07:37.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8780" for this suite. 
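The secrets spec above exercises defaultMode, the file mode applied to every key projected into a secret volume. A minimal sketch assuming a recent client-go, with the namespace and container name from the log and an assumed busybox image and secret payload (the test's mounttest image prints the mode and contents the same way):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.TODO()
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)
	ns := "secrets-8780"

	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(ctx, sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	mode := int32(0400) // owner read-only, applied to every projected key
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
					SecretName:  "secret-test",
					DefaultMode: &mode,
				}},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The "mappings and Item Mode" variant of this spec later in the run does the same thing per key, via Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path", Mode: &mode}} on the SecretVolumeSource.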
Feb 11 13:07:45.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:07:46.034: INFO: namespace secrets-8780 deletion completed in 8.163278151s • [SLOW TEST:17.282 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:07:46.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 11 13:07:46.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1630' Feb 11 13:07:46.293: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 11 13:07:46.293: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Feb 11 13:07:48.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1630' Feb 11 13:07:48.788: INFO: stderr: "" Feb 11 13:07:48.789: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:07:48.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1630" for this suite. 
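The "verifying the pod controlled by e2e-test-nginx-deployment gets created" step above works because the deployment generator labels the pod template with run=<name>, so the controlled pod can be found with a label selector. A short sketch assuming a recent client-go, namespace from the log:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)
	for {
		// kubectl run's deployment generator tags pods with run=<name>.
		pods, err := cs.CoreV1().Pods("kubectl-1630").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "run=e2e-test-nginx-deployment"})
		if err != nil {
			panic(err)
		}
		if len(pods.Items) > 0 {
			for _, p := range pods.Items {
				fmt.Println(p.Name, p.Status.Phase)
			}
			return
		}
		time.Sleep(2 * time.Second)
	}
}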
Feb 11 13:07:54.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:07:55.145: INFO: namespace kubectl-1630 deletion completed in 6.34152106s • [SLOW TEST:9.111 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:07:55.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 11 13:07:55.404: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e2dbe28c-e215-4edb-953e-76c9dbf46e73", Controller:(*bool)(0xc0016170aa), BlockOwnerDeletion:(*bool)(0xc0016170ab)}} Feb 11 13:07:55.424: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"0f3853a5-96e8-44dc-9e75-055ab243ded8", Controller:(*bool)(0xc0016173aa), BlockOwnerDeletion:(*bool)(0xc0016173ab)}} Feb 11 13:07:55.470: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"bc9bc737-d232-473a-9863-7618206181dd", Controller:(*bool)(0xc00258a752), BlockOwnerDeletion:(*bool)(0xc00258a753)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:08:00.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8736" for this suite. 
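The three OwnerReferences dumps above form the cycle the garbage-collector spec is checking: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. Since an owner's UID is only known after creation, the cycle has to be built by updating metadata afterwards. A hedged sketch assuming a recent client-go (pod names from the log, image and container assumed):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.TODO()
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)
	pods := cs.CoreV1().Pods("gc-8736")

	newPod := func(name string) *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Spec: corev1.PodSpec{Containers: []corev1.Container{{
				Name: "c", Image: "busybox", Command: []string{"sleep", "3600"},
			}}},
		}
	}
	p1, _ := pods.Create(ctx, newPod("pod1"), metav1.CreateOptions{})
	p2, _ := pods.Create(ctx, newPod("pod2"), metav1.CreateOptions{})
	p3, _ := pods.Create(ctx, newPod("pod3"), metav1.CreateOptions{})

	yes := true
	own := func(child, owner *corev1.Pod) {
		child.OwnerReferences = []metav1.OwnerReference{{
			APIVersion: "v1", Kind: "Pod", Name: owner.Name, UID: owner.UID,
			Controller: &yes, BlockOwnerDeletion: &yes,
		}}
		if _, err := pods.Update(ctx, child, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	// The circle from the log: pod1 <- pod3, pod2 <- pod1, pod3 <- pod2.
	own(p1, p3)
	own(p2, p1)
	own(p3, p2)
	// Deleting any one pod now lets the garbage collector break the cycle
	// and collect all three instead of deadlocking, which is the assertion.
	_ = pods.Delete(ctx, "pod1", metav1.DeleteOptions{})
}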
Feb 11 13:08:06.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:08:06.771: INFO: namespace gc-8736 deletion completed in 6.175685561s • [SLOW TEST:11.625 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:08:06.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 11 13:08:41.020: INFO: Container started at 2020-02-11 13:08:16 +0000 UTC, pod became ready at 2020-02-11 13:08:40 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:08:41.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7916" for this suite. 
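The initial-delay spec above reports "Container started at 13:08:16, pod became ready at 13:08:40": the gap is the point, because the kubelet sends no readiness probes before InitialDelaySeconds elapses. The test pod is not printed, so the values below (20s delay, busybox image) are assumptions; the sketch also polls the PodReady condition the way the log line does:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.TODO()
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)
	ns := "container-probe-7916"

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "probe-delay"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:    "busybox",
			Image:   "busybox",
			Command: []string{"sleep", "3600"},
			ReadinessProbe: &corev1.Probe{
				// The probe always succeeds, but nothing is sent before
				// InitialDelaySeconds, so Ready lags container start.
				ProbeHandler:        corev1.ProbeHandler{Exec: &corev1.ExecAction{Command: []string{"/bin/true"}}},
				InitialDelaySeconds: 20,
				PeriodSeconds:       5,
			},
		}}},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	for {
		p, _ := cs.CoreV1().Pods(ns).Get(ctx, pod.Name, metav1.GetOptions{})
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				fmt.Println("pod became ready at", c.LastTransitionTime)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}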
Feb 11 13:09:03.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:09:03.180: INFO: namespace container-probe-7916 deletion completed in 22.154394887s • [SLOW TEST:56.408 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:09:03.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 11 13:09:03.475: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2334,SelfLink:/api/v1/namespaces/watch-2334/configmaps/e2e-watch-test-resource-version,UID:833f56db-feb3-4e21-8a21-f5f3ff93bfdc,ResourceVersion:23943919,Generation:0,CreationTimestamp:2020-02-11 13:09:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 11 13:09:03.476: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2334,SelfLink:/api/v1/namespaces/watch-2334/configmaps/e2e-watch-test-resource-version,UID:833f56db-feb3-4e21-8a21-f5f3ff93bfdc,ResourceVersion:23943920,Generation:0,CreationTimestamp:2020-02-11 13:09:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:09:03.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2334" for this suite. 
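The watch spec above captures the resourceVersion returned by the first update, mutates and deletes the configmap, and then opens a watch from that version; the server replays only the later events, which is why the log shows exactly one MODIFIED (mutation: 2) followed by DELETED. A minimal sketch assuming a recent client-go, names and namespace from the log, error handling trimmed:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.TODO()
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)
	cms := cs.CoreV1().ConfigMaps("watch-2334")

	cm, _ := cms.Create(ctx, &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-watch-test-resource-version"},
	}, metav1.CreateOptions{})

	// First mutation; remember the resourceVersion it returns.
	cm.Data = map[string]string{"mutation": "1"}
	cm, _ = cms.Update(ctx, cm, metav1.UpdateOptions{})
	fromRV := cm.ResourceVersion

	// Second mutation and a delete, both after the remembered version.
	cm.Data["mutation"] = "2"
	cms.Update(ctx, cm, metav1.UpdateOptions{})
	cms.Delete(ctx, cm.Name, metav1.DeleteOptions{})

	// Watching from fromRV replays only the later events.
	w, _ := cms.Watch(ctx, metav1.ListOptions{
		ResourceVersion: fromRV,
		FieldSelector:   "metadata.name=" + cm.Name,
	})
	for ev := range w.ResultChan() {
		fmt.Println("Got:", ev.Type) // MODIFIED, then DELETED, as in the log
		if ev.Type == "DELETED" {
			return
		}
	}
}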
Feb 11 13:09:09.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:09:09.608: INFO: namespace watch-2334 deletion completed in 6.122183087s • [SLOW TEST:6.428 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:09:09.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-82f2113c-72b7-4399-b4e3-926da147acaa STEP: Creating a pod to test consume secrets Feb 11 13:09:09.821: INFO: Waiting up to 5m0s for pod "pod-secrets-4d2aa789-1e69-4e39-b70b-98cb4b6ef075" in namespace "secrets-543" to be "success or failure" Feb 11 13:09:09.834: INFO: Pod "pod-secrets-4d2aa789-1e69-4e39-b70b-98cb4b6ef075": Phase="Pending", Reason="", readiness=false. Elapsed: 12.379261ms Feb 11 13:09:11.859: INFO: Pod "pod-secrets-4d2aa789-1e69-4e39-b70b-98cb4b6ef075": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037634025s Feb 11 13:09:13.867: INFO: Pod "pod-secrets-4d2aa789-1e69-4e39-b70b-98cb4b6ef075": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046040712s Feb 11 13:09:15.878: INFO: Pod "pod-secrets-4d2aa789-1e69-4e39-b70b-98cb4b6ef075": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056550709s Feb 11 13:09:17.885: INFO: Pod "pod-secrets-4d2aa789-1e69-4e39-b70b-98cb4b6ef075": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06391846s Feb 11 13:09:19.895: INFO: Pod "pod-secrets-4d2aa789-1e69-4e39-b70b-98cb4b6ef075": Phase="Running", Reason="", readiness=true. Elapsed: 10.073501732s Feb 11 13:09:21.902: INFO: Pod "pod-secrets-4d2aa789-1e69-4e39-b70b-98cb4b6ef075": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.081153099s STEP: Saw pod success Feb 11 13:09:21.902: INFO: Pod "pod-secrets-4d2aa789-1e69-4e39-b70b-98cb4b6ef075" satisfied condition "success or failure" Feb 11 13:09:21.906: INFO: Trying to get logs from node iruya-node pod pod-secrets-4d2aa789-1e69-4e39-b70b-98cb4b6ef075 container secret-volume-test: STEP: delete the pod Feb 11 13:09:22.012: INFO: Waiting for pod pod-secrets-4d2aa789-1e69-4e39-b70b-98cb4b6ef075 to disappear Feb 11 13:09:22.020: INFO: Pod pod-secrets-4d2aa789-1e69-4e39-b70b-98cb4b6ef075 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:09:22.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-543" for this suite. Feb 11 13:09:28.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:09:28.232: INFO: namespace secrets-543 deletion completed in 6.198467293s • [SLOW TEST:18.624 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:09:28.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-b3c28e40-878e-49e7-b1d2-6ed31fbc1f48 STEP: Creating a pod to test consume secrets Feb 11 13:09:28.631: INFO: Waiting up to 5m0s for pod "pod-secrets-4caef117-3df5-4ad5-b321-2cbdb4265887" in namespace "secrets-5435" to be "success or failure" Feb 11 13:09:28.671: INFO: Pod "pod-secrets-4caef117-3df5-4ad5-b321-2cbdb4265887": Phase="Pending", Reason="", readiness=false. Elapsed: 39.879519ms Feb 11 13:09:30.692: INFO: Pod "pod-secrets-4caef117-3df5-4ad5-b321-2cbdb4265887": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060319852s Feb 11 13:09:32.713: INFO: Pod "pod-secrets-4caef117-3df5-4ad5-b321-2cbdb4265887": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081098498s Feb 11 13:09:34.724: INFO: Pod "pod-secrets-4caef117-3df5-4ad5-b321-2cbdb4265887": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092906661s Feb 11 13:09:36.736: INFO: Pod "pod-secrets-4caef117-3df5-4ad5-b321-2cbdb4265887": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.104144873s Feb 11 13:09:38.749: INFO: Pod "pod-secrets-4caef117-3df5-4ad5-b321-2cbdb4265887": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117422032s Feb 11 13:09:40.756: INFO: Pod "pod-secrets-4caef117-3df5-4ad5-b321-2cbdb4265887": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.124880296s STEP: Saw pod success Feb 11 13:09:40.757: INFO: Pod "pod-secrets-4caef117-3df5-4ad5-b321-2cbdb4265887" satisfied condition "success or failure" Feb 11 13:09:40.760: INFO: Trying to get logs from node iruya-node pod pod-secrets-4caef117-3df5-4ad5-b321-2cbdb4265887 container secret-volume-test: STEP: delete the pod Feb 11 13:09:40.805: INFO: Waiting for pod pod-secrets-4caef117-3df5-4ad5-b321-2cbdb4265887 to disappear Feb 11 13:09:40.836: INFO: Pod pod-secrets-4caef117-3df5-4ad5-b321-2cbdb4265887 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:09:40.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5435" for this suite. Feb 11 13:09:46.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:09:47.042: INFO: namespace secrets-5435 deletion completed in 6.20043789s STEP: Destroying namespace "secret-namespace-9760" for this suite. Feb 11 13:09:53.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:09:53.287: INFO: namespace secret-namespace-9760 deletion completed in 6.244742872s • [SLOW TEST:25.054 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:09:53.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Feb 11 13:09:53.501: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
Feb 11 13:09:54.247: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Feb 11 13:09:56.695: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 13:09:58.710: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 13:10:00.706: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 13:10:02.725: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 13:10:04.711: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 13:10:06.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717023394, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 13:10:12.017: INFO: Waited 3.289556729s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:10:13.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5931" for this suite. 
Feb 11 13:10:19.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:10:19.421: INFO: namespace aggregator-5931 deletion completed in 6.406741479s • [SLOW TEST:26.133 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:10:19.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2582 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 11 13:10:20.703: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 11 13:11:00.987: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2582 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 11 13:11:00.988: INFO: >>> kubeConfig: /root/.kube/config I0211 13:11:01.083573 9 log.go:172] (0xc000bccdc0) (0xc0003b2b40) Create stream I0211 13:11:01.083744 9 log.go:172] (0xc000bccdc0) (0xc0003b2b40) Stream added, broadcasting: 1 I0211 13:11:01.095520 9 log.go:172] (0xc000bccdc0) Reply frame received for 1 I0211 13:11:01.095587 9 log.go:172] (0xc000bccdc0) (0xc00122a280) Create stream I0211 13:11:01.095599 9 log.go:172] (0xc000bccdc0) (0xc00122a280) Stream added, broadcasting: 3 I0211 13:11:01.097482 9 log.go:172] (0xc000bccdc0) Reply frame received for 3 I0211 13:11:01.097515 9 log.go:172] (0xc000bccdc0) (0xc000246280) Create stream I0211 13:11:01.097523 9 log.go:172] (0xc000bccdc0) (0xc000246280) Stream added, broadcasting: 5 I0211 13:11:01.099131 9 log.go:172] (0xc000bccdc0) Reply frame received for 5 I0211 13:11:02.254508 9 log.go:172] (0xc000bccdc0) Data frame received for 3 I0211 13:11:02.254653 9 log.go:172] (0xc00122a280) (3) Data frame handling I0211 13:11:02.254691 9 log.go:172] (0xc00122a280) (3) Data frame sent I0211 13:11:02.384285 9 log.go:172] (0xc000bccdc0) Data frame received for 1 I0211 13:11:02.384414 9 log.go:172] (0xc000bccdc0) (0xc00122a280) Stream removed, broadcasting: 3 I0211 13:11:02.384490 9 log.go:172] (0xc0003b2b40) (1) Data frame handling I0211 13:11:02.384532 9 log.go:172] (0xc0003b2b40) (1) Data frame sent I0211 13:11:02.384581 9 log.go:172] (0xc000bccdc0) (0xc000246280) Stream removed, broadcasting: 5 I0211 13:11:02.384659 9 log.go:172] (0xc000bccdc0) (0xc0003b2b40) Stream removed, 
broadcasting: 1 I0211 13:11:02.384686 9 log.go:172] (0xc000bccdc0) Go away received I0211 13:11:02.385014 9 log.go:172] (0xc000bccdc0) (0xc0003b2b40) Stream removed, broadcasting: 1 I0211 13:11:02.385036 9 log.go:172] (0xc000bccdc0) (0xc00122a280) Stream removed, broadcasting: 3 I0211 13:11:02.385050 9 log.go:172] (0xc000bccdc0) (0xc000246280) Stream removed, broadcasting: 5 Feb 11 13:11:02.385: INFO: Found all expected endpoints: [netserver-0] Feb 11 13:11:02.394: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2582 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 11 13:11:02.395: INFO: >>> kubeConfig: /root/.kube/config I0211 13:11:02.492266 9 log.go:172] (0xc000a26840) (0xc000246d20) Create stream I0211 13:11:02.492356 9 log.go:172] (0xc000a26840) (0xc000246d20) Stream added, broadcasting: 1 I0211 13:11:02.509796 9 log.go:172] (0xc000a26840) Reply frame received for 1 I0211 13:11:02.509940 9 log.go:172] (0xc000a26840) (0xc00122a3c0) Create stream I0211 13:11:02.509989 9 log.go:172] (0xc000a26840) (0xc00122a3c0) Stream added, broadcasting: 3 I0211 13:11:02.514148 9 log.go:172] (0xc000a26840) Reply frame received for 3 I0211 13:11:02.514214 9 log.go:172] (0xc000a26840) (0xc0003b2be0) Create stream I0211 13:11:02.514229 9 log.go:172] (0xc000a26840) (0xc0003b2be0) Stream added, broadcasting: 5 I0211 13:11:02.516246 9 log.go:172] (0xc000a26840) Reply frame received for 5 I0211 13:11:03.854217 9 log.go:172] (0xc000a26840) Data frame received for 3 I0211 13:11:03.854369 9 log.go:172] (0xc00122a3c0) (3) Data frame handling I0211 13:11:03.854432 9 log.go:172] (0xc00122a3c0) (3) Data frame sent I0211 13:11:04.097837 9 log.go:172] (0xc000a26840) Data frame received for 1 I0211 13:11:04.098155 9 log.go:172] (0xc000a26840) (0xc00122a3c0) Stream removed, broadcasting: 3 I0211 13:11:04.098437 9 log.go:172] (0xc000246d20) (1) Data frame handling I0211 13:11:04.098504 9 log.go:172] (0xc000246d20) (1) Data frame sent I0211 13:11:04.098534 9 log.go:172] (0xc000a26840) (0xc000246d20) Stream removed, broadcasting: 1 I0211 13:11:04.098678 9 log.go:172] (0xc000a26840) (0xc0003b2be0) Stream removed, broadcasting: 5 I0211 13:11:04.098806 9 log.go:172] (0xc000a26840) Go away received I0211 13:11:04.099126 9 log.go:172] (0xc000a26840) (0xc000246d20) Stream removed, broadcasting: 1 I0211 13:11:04.099169 9 log.go:172] (0xc000a26840) (0xc00122a3c0) Stream removed, broadcasting: 3 I0211 13:11:04.099184 9 log.go:172] (0xc000a26840) (0xc0003b2be0) Stream removed, broadcasting: 5 Feb 11 13:11:04.099: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:11:04.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2582" for this suite. 
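The ExecWithOptions calls and the stream create/remove chatter above are the e2e framework driving the pods/exec subresource over SPDY: one stream for stdout (3), one for stderr (5), plus the control stream (1). Roughly the same thing can be done with client-go directly; a sketch assuming a recent client-go (StreamWithContext replaced the older Stream), with the namespace, pod, container, and nc command taken from the log:

package main

import (
	"bytes"
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)

	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("pod-network-test-2582").
		Name("host-test-container-pod").SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "hostexec",
			Command:   []string{"/bin/sh", "-c", "echo hostName | nc -w 1 -u 10.44.0.1 8081"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.StreamWithContext(context.TODO(), remotecommand.StreamOptions{
		Stdout: &stdout, Stderr: &stderr,
	}); err != nil {
		panic(err)
	}
	fmt.Printf("stdout: %q\nstderr: %q\n", stdout.String(), stderr.String())
}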
Feb 11 13:11:28.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:11:28.270: INFO: namespace pod-network-test-2582 deletion completed in 24.157825219s • [SLOW TEST:68.848 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:11:28.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:11:34.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-399" for this suite. 
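The concurrent-watches spec above asserts a core apiserver guarantee: two watches opened at the same resourceVersion must see the same events in the same order. A hedged sketch of that check, assuming a recent client-go; the namespace comes from the log, and producing the configmap churn (which the test does in a background goroutine) is left as a comment:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// drain reads n events and returns their resourceVersions in arrival order.
func drain(w watch.Interface, n int) []string {
	var rvs []string
	for ev := range w.ResultChan() {
		if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
			rvs = append(rvs, cm.ResourceVersion)
			if len(rvs) == n {
				break
			}
		}
	}
	return rvs
}

func main() {
	ctx := context.TODO()
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)
	cms := cs.CoreV1().ConfigMaps("watch-399")

	list, _ := cms.List(ctx, metav1.ListOptions{})
	opts := metav1.ListOptions{ResourceVersion: list.ResourceVersion}

	// Two watches anchored at the same resourceVersion.
	w1, _ := cms.Watch(ctx, opts)
	w2, _ := cms.Watch(ctx, opts)
	defer w1.Stop()
	defer w2.Stop()

	// ... create/update/delete configmaps here, as the test's goroutine does ...

	a, b := drain(w1, 10), drain(w2, 10)
	for i := range a {
		if a[i] != b[i] {
			panic(fmt.Sprintf("order diverged at event %d: %s vs %s", i, a[i], b[i]))
		}
	}
	fmt.Println("both watches observed the same order")
}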
Feb 11 13:11:40.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:11:40.372: INFO: namespace watch-399 deletion completed in 6.317706565s • [SLOW TEST:12.103 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:11:40.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-39514386-2167-4043-a800-6e31bc287444 STEP: Creating a pod to test consume secrets Feb 11 13:11:40.508: INFO: Waiting up to 5m0s for pod "pod-secrets-2a961a7b-58a1-4d5e-a3e0-0cc703e7780f" in namespace "secrets-4260" to be "success or failure" Feb 11 13:11:40.517: INFO: Pod "pod-secrets-2a961a7b-58a1-4d5e-a3e0-0cc703e7780f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130844ms Feb 11 13:11:42.530: INFO: Pod "pod-secrets-2a961a7b-58a1-4d5e-a3e0-0cc703e7780f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021637717s Feb 11 13:11:44.544: INFO: Pod "pod-secrets-2a961a7b-58a1-4d5e-a3e0-0cc703e7780f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035459089s Feb 11 13:11:46.560: INFO: Pod "pod-secrets-2a961a7b-58a1-4d5e-a3e0-0cc703e7780f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0512751s Feb 11 13:11:48.582: INFO: Pod "pod-secrets-2a961a7b-58a1-4d5e-a3e0-0cc703e7780f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073124022s Feb 11 13:11:50.594: INFO: Pod "pod-secrets-2a961a7b-58a1-4d5e-a3e0-0cc703e7780f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.08564243s Feb 11 13:11:52.608: INFO: Pod "pod-secrets-2a961a7b-58a1-4d5e-a3e0-0cc703e7780f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.099703776s STEP: Saw pod success Feb 11 13:11:52.608: INFO: Pod "pod-secrets-2a961a7b-58a1-4d5e-a3e0-0cc703e7780f" satisfied condition "success or failure" Feb 11 13:11:52.614: INFO: Trying to get logs from node iruya-node pod pod-secrets-2a961a7b-58a1-4d5e-a3e0-0cc703e7780f container secret-volume-test: STEP: delete the pod Feb 11 13:11:52.751: INFO: Waiting for pod pod-secrets-2a961a7b-58a1-4d5e-a3e0-0cc703e7780f to disappear Feb 11 13:11:52.759: INFO: Pod pod-secrets-2a961a7b-58a1-4d5e-a3e0-0cc703e7780f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:11:52.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4260" for this suite. 
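The "consumable in multiple volumes" spec above mounts one secret through two separate volumes at two paths in the same pod. A minimal sketch assuming a recent client-go; the namespace and container name come from the log, while the image, secret payload, and mount paths are placeholders:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.TODO()
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)
	ns := "secrets-4260"

	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	cs.CoreV1().Secrets(ns).Create(ctx, sec, metav1.CreateOptions{})

	// The same secret backs both volumes.
	src := corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"}}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "secret-volume-1", VolumeSource: src},
				{Name: "secret-volume-2", VolumeSource: src},
			},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // placeholder; the e2e test uses its own mounttest image
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1"},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2"},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}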
Feb 11 13:11:58.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:11:58.936: INFO: namespace secrets-4260 deletion completed in 6.171467695s • [SLOW TEST:18.564 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:11:58.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-lmr2 STEP: Creating a pod to test atomic-volume-subpath Feb 11 13:11:59.138: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lmr2" in namespace "subpath-7270" to be "success or failure" Feb 11 13:11:59.214: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Pending", Reason="", readiness=false. Elapsed: 75.356937ms Feb 11 13:12:01.234: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095956838s Feb 11 13:12:03.249: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11039773s Feb 11 13:12:05.261: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122544804s Feb 11 13:12:07.270: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131685477s Feb 11 13:12:09.278: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Running", Reason="", readiness=true. Elapsed: 10.139615659s Feb 11 13:12:11.288: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Running", Reason="", readiness=true. Elapsed: 12.149560885s Feb 11 13:12:13.302: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Running", Reason="", readiness=true. Elapsed: 14.16325696s Feb 11 13:12:15.312: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Running", Reason="", readiness=true. Elapsed: 16.173337533s Feb 11 13:12:17.321: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Running", Reason="", readiness=true. Elapsed: 18.183013682s Feb 11 13:12:19.334: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Running", Reason="", readiness=true. Elapsed: 20.19519023s Feb 11 13:12:21.346: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Running", Reason="", readiness=true. Elapsed: 22.20756805s Feb 11 13:12:23.359: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.221176479s Feb 11 13:12:25.373: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Running", Reason="", readiness=true. Elapsed: 26.234898676s Feb 11 13:12:27.383: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Running", Reason="", readiness=true. Elapsed: 28.244249458s Feb 11 13:12:29.392: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Running", Reason="", readiness=true. Elapsed: 30.253489458s Feb 11 13:12:31.401: INFO: Pod "pod-subpath-test-configmap-lmr2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.262369571s STEP: Saw pod success Feb 11 13:12:31.401: INFO: Pod "pod-subpath-test-configmap-lmr2" satisfied condition "success or failure" Feb 11 13:12:31.405: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-lmr2 container test-container-subpath-configmap-lmr2: STEP: delete the pod Feb 11 13:12:32.104: INFO: Waiting for pod pod-subpath-test-configmap-lmr2 to disappear Feb 11 13:12:32.113: INFO: Pod pod-subpath-test-configmap-lmr2 no longer exists STEP: Deleting pod pod-subpath-test-configmap-lmr2 Feb 11 13:12:32.113: INFO: Deleting pod "pod-subpath-test-configmap-lmr2" in namespace "subpath-7270" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:12:32.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7270" for this suite. Feb 11 13:12:38.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:12:38.341: INFO: namespace subpath-7270 deletion completed in 6.204738109s • [SLOW TEST:39.404 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:12:38.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-c0dfdbd5-8ac5-43ba-9420-ce4f95455982 STEP: Creating a pod to test consume secrets Feb 11 13:12:38.695: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-715c3eb1-e3f0-4abc-bd82-365c8ce6a186" in namespace "projected-2159" to be "success or failure" Feb 11 13:12:38.721: INFO: Pod "pod-projected-secrets-715c3eb1-e3f0-4abc-bd82-365c8ce6a186": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.650332ms Feb 11 13:12:40.737: INFO: Pod "pod-projected-secrets-715c3eb1-e3f0-4abc-bd82-365c8ce6a186": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04044926s Feb 11 13:12:42.746: INFO: Pod "pod-projected-secrets-715c3eb1-e3f0-4abc-bd82-365c8ce6a186": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050145296s Feb 11 13:12:44.758: INFO: Pod "pod-projected-secrets-715c3eb1-e3f0-4abc-bd82-365c8ce6a186": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061696631s Feb 11 13:12:46.770: INFO: Pod "pod-projected-secrets-715c3eb1-e3f0-4abc-bd82-365c8ce6a186": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074236875s Feb 11 13:12:48.805: INFO: Pod "pod-projected-secrets-715c3eb1-e3f0-4abc-bd82-365c8ce6a186": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108431673s STEP: Saw pod success Feb 11 13:12:48.805: INFO: Pod "pod-projected-secrets-715c3eb1-e3f0-4abc-bd82-365c8ce6a186" satisfied condition "success or failure" Feb 11 13:12:48.822: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-715c3eb1-e3f0-4abc-bd82-365c8ce6a186 container projected-secret-volume-test: STEP: delete the pod Feb 11 13:12:49.173: INFO: Waiting for pod pod-projected-secrets-715c3eb1-e3f0-4abc-bd82-365c8ce6a186 to disappear Feb 11 13:12:49.181: INFO: Pod pod-projected-secrets-715c3eb1-e3f0-4abc-bd82-365c8ce6a186 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:12:49.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2159" for this suite. Feb 11 13:12:55.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:12:55.317: INFO: namespace projected-2159 deletion completed in 6.119524433s • [SLOW TEST:16.975 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:12:55.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:13:05.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6508" for this suite. 
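The EmptyDir wrapper test above places more than one wrapper-backed volume type (here a secret and a configMap, both of which the kubelet materializes onto disk via its atomic-writer machinery) into one pod and checks they do not conflict. A sketch of that volume combination, with illustrative resource names:

package demo

import corev1 "k8s.io/api/core/v1"

// wrapperVolumes returns a secret volume and a configMap volume intended to
// be mounted side by side in a single pod, the combination this e2e test
// exercises for conflicts.
func wrapperVolumes() []corev1.Volume {
	return []corev1.Volume{
		{
			Name: "secret-volume",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret"},
			},
		},
		{
			Name: "configmap-volume",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-configmap"},
				},
			},
		},
	}
}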
Feb 11 13:13:11.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:13:11.928: INFO: namespace emptydir-wrapper-6508 deletion completed in 6.294243472s • [SLOW TEST:16.611 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:13:11.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-8aa23fef-c8b3-4153-ada5-ad3a7ade9a05 STEP: Creating secret with name s-test-opt-upd-4af7b55e-fa4d-4713-970d-bcb7e79bdf29 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-8aa23fef-c8b3-4153-ada5-ad3a7ade9a05 STEP: Updating secret s-test-opt-upd-4af7b55e-fa4d-4713-970d-bcb7e79bdf29 STEP: Creating secret with name s-test-opt-create-d934eb9e-4def-439b-a329-4041891bef4b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:14:38.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8244" for this suite. 
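The optional-secret test above deletes one secret, updates another, and creates a third while the pod is running, then waits for the kubelet to fold all three changes into the mounted files (hence the long pause before the AfterEach). The spec detail that makes this possible is the Optional flag on the secret volume source; a minimal sketch:

package demo

import corev1 "k8s.io/api/core/v1"

// optionalSecretVolume builds a secret-backed volume that tolerates the
// secret being absent: with Optional set, the pod starts even if the secret
// does not exist yet, and the kubelet later reconciles creates, updates and
// deletes of the secret into the mounted files.
func optionalSecretVolume(secretName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "optional-secret",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: secretName,
				Optional:   &optional,
			},
		},
	}
}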
Feb 11 13:15:02.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:15:02.713: INFO: namespace secrets-8244 deletion completed in 24.161376212s • [SLOW TEST:110.783 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:15:02.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-h5t4 STEP: Creating a pod to test atomic-volume-subpath Feb 11 13:15:02.804: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-h5t4" in namespace "subpath-9025" to be "success or failure" Feb 11 13:15:02.884: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Pending", Reason="", readiness=false. Elapsed: 79.739025ms Feb 11 13:15:04.890: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086243141s Feb 11 13:15:06.902: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098003339s Feb 11 13:15:08.914: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109957044s Feb 11 13:15:10.926: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122080079s Feb 11 13:15:12.939: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Running", Reason="", readiness=true. Elapsed: 10.135158945s Feb 11 13:15:14.950: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Running", Reason="", readiness=true. Elapsed: 12.145514796s Feb 11 13:15:16.966: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Running", Reason="", readiness=true. Elapsed: 14.16148931s Feb 11 13:15:18.979: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Running", Reason="", readiness=true. Elapsed: 16.174836144s Feb 11 13:15:20.991: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Running", Reason="", readiness=true. Elapsed: 18.187161813s Feb 11 13:15:23.003: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Running", Reason="", readiness=true. Elapsed: 20.198746327s Feb 11 13:15:25.012: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Running", Reason="", readiness=true. Elapsed: 22.20832679s Feb 11 13:15:27.029: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.224760499s Feb 11 13:15:29.045: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Running", Reason="", readiness=true. Elapsed: 26.240870467s Feb 11 13:15:31.056: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Running", Reason="", readiness=true. Elapsed: 28.25153454s Feb 11 13:15:33.071: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Running", Reason="", readiness=true. Elapsed: 30.266654967s Feb 11 13:15:35.082: INFO: Pod "pod-subpath-test-configmap-h5t4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.277489881s STEP: Saw pod success Feb 11 13:15:35.082: INFO: Pod "pod-subpath-test-configmap-h5t4" satisfied condition "success or failure" Feb 11 13:15:35.086: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-h5t4 container test-container-subpath-configmap-h5t4: STEP: delete the pod Feb 11 13:15:35.133: INFO: Waiting for pod pod-subpath-test-configmap-h5t4 to disappear Feb 11 13:15:35.138: INFO: Pod pod-subpath-test-configmap-h5t4 no longer exists STEP: Deleting pod pod-subpath-test-configmap-h5t4 Feb 11 13:15:35.138: INFO: Deleting pod "pod-subpath-test-configmap-h5t4" in namespace "subpath-9025" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:15:35.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9025" for this suite. Feb 11 13:15:41.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:15:41.399: INFO: namespace subpath-9025 deletion completed in 6.249485317s • [SLOW TEST:38.686 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:15:41.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-32f287f4-cd9e-4bf1-8fa7-de257d5b62f0 STEP: Creating a pod to test consume configMaps Feb 11 13:15:41.485: INFO: Waiting up to 5m0s for pod "pod-configmaps-4664a721-4618-4102-b390-95a04ed861d3" in namespace "configmap-3439" to be "success or failure" Feb 11 13:15:41.543: INFO: Pod "pod-configmaps-4664a721-4618-4102-b390-95a04ed861d3": Phase="Pending", Reason="", readiness=false. Elapsed: 58.445604ms Feb 11 13:15:43.566: INFO: Pod "pod-configmaps-4664a721-4618-4102-b390-95a04ed861d3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.081265831s Feb 11 13:15:45.573: INFO: Pod "pod-configmaps-4664a721-4618-4102-b390-95a04ed861d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088181496s Feb 11 13:15:47.600: INFO: Pod "pod-configmaps-4664a721-4618-4102-b390-95a04ed861d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115447905s Feb 11 13:15:49.608: INFO: Pod "pod-configmaps-4664a721-4618-4102-b390-95a04ed861d3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123492223s Feb 11 13:15:51.622: INFO: Pod "pod-configmaps-4664a721-4618-4102-b390-95a04ed861d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.136817053s STEP: Saw pod success Feb 11 13:15:51.622: INFO: Pod "pod-configmaps-4664a721-4618-4102-b390-95a04ed861d3" satisfied condition "success or failure" Feb 11 13:15:51.626: INFO: Trying to get logs from node iruya-node pod pod-configmaps-4664a721-4618-4102-b390-95a04ed861d3 container configmap-volume-test: STEP: delete the pod Feb 11 13:15:51.730: INFO: Waiting for pod pod-configmaps-4664a721-4618-4102-b390-95a04ed861d3 to disappear Feb 11 13:15:51.777: INFO: Pod pod-configmaps-4664a721-4618-4102-b390-95a04ed861d3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:15:51.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3439" for this suite. Feb 11 13:15:57.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:15:58.053: INFO: namespace configmap-3439 deletion completed in 6.256611237s • [SLOW TEST:16.654 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:15:58.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0211 13:16:08.325977 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
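The garbage-collector test running here deletes a replication controller without orphaning, so the GC controller must remove the RC's pods through their ownerReferences. With client-go, that choice is expressed as a deletion propagation policy; a sketch with recent, context-taking signatures (the RC name and namespace are illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Background propagation: delete the RC immediately and let the garbage
	// collector delete its pods afterwards. DeletePropagationOrphan would
	// instead strip the ownerReferences and leave the pods running.
	policy := metav1.DeletePropagationBackground
	err = client.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "demo-rc", metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}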
Feb 11 13:16:08.326: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:16:08.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8675" for this suite.
Feb 11 13:16:14.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:16:14.717: INFO: namespace gc-8675 deletion completed in 6.385525808s
• [SLOW TEST:16.664 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:16:14.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 13:16:14.972: INFO: Waiting up to 5m0s for pod "downwardapi-volume-641ab62e-5a09-453f-b9da-ea13f7feb0a7" in namespace "downward-api-658" to be "success or failure"
Feb 11 13:16:14.997: INFO: Pod "downwardapi-volume-641ab62e-5a09-453f-b9da-ea13f7feb0a7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.655729ms
Feb 11 13:16:17.021: INFO: Pod "downwardapi-volume-641ab62e-5a09-453f-b9da-ea13f7feb0a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049630302s
Feb 11 13:16:19.048: INFO: Pod "downwardapi-volume-641ab62e-5a09-453f-b9da-ea13f7feb0a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076326775s
Feb 11 13:16:21.119: INFO: Pod "downwardapi-volume-641ab62e-5a09-453f-b9da-ea13f7feb0a7": Phase="Pending", Reason="", readiness=false.
Elapsed: 6.147060519s Feb 11 13:16:23.129: INFO: Pod "downwardapi-volume-641ab62e-5a09-453f-b9da-ea13f7feb0a7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156719176s Feb 11 13:16:25.138: INFO: Pod "downwardapi-volume-641ab62e-5a09-453f-b9da-ea13f7feb0a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.166575014s STEP: Saw pod success Feb 11 13:16:25.139: INFO: Pod "downwardapi-volume-641ab62e-5a09-453f-b9da-ea13f7feb0a7" satisfied condition "success or failure" Feb 11 13:16:25.142: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-641ab62e-5a09-453f-b9da-ea13f7feb0a7 container client-container: STEP: delete the pod Feb 11 13:16:25.222: INFO: Waiting for pod downwardapi-volume-641ab62e-5a09-453f-b9da-ea13f7feb0a7 to disappear Feb 11 13:16:25.232: INFO: Pod downwardapi-volume-641ab62e-5a09-453f-b9da-ea13f7feb0a7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:16:25.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-658" for this suite. Feb 11 13:16:31.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:16:31.405: INFO: namespace downward-api-658 deletion completed in 6.165100876s • [SLOW TEST:16.687 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:16:31.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 11 13:16:40.407: INFO: Successfully updated pod "labelsupdate2d8bbfe0-2615-4438-9b6d-c8c8b30a6a29" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:16:42.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6212" for this suite. 
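The projected downwardAPI test above mounts the pod's own labels as a file, modifies the labels, and waits for the file to follow. Unlike environment variables, downward-API volume files are rewritten by the kubelet when the referenced metadata changes. A sketch of the projected volume (volume and file names are illustrative):

package demo

import corev1 "k8s.io/api/core/v1"

// labelsProjection exposes the pod's metadata.labels as a file named
// "labels" inside a projected volume. The kubelet refreshes the file when
// the labels are modified, which is what the e2e test waits for.
func labelsProjection() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "labels",
							FieldRef: &corev1.ObjectFieldSelector{
								FieldPath: "metadata.labels",
							},
						}},
					},
				}},
			},
		},
	}
}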
Feb 11 13:17:04.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:17:04.773: INFO: namespace projected-6212 deletion completed in 22.17932319s • [SLOW TEST:33.368 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:17:04.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 11 13:17:04.962: INFO: Waiting up to 5m0s for pod "downward-api-a7ea0ac5-adab-4bac-9929-a34789a07e30" in namespace "downward-api-445" to be "success or failure" Feb 11 13:17:05.017: INFO: Pod "downward-api-a7ea0ac5-adab-4bac-9929-a34789a07e30": Phase="Pending", Reason="", readiness=false. Elapsed: 54.061702ms Feb 11 13:17:07.023: INFO: Pod "downward-api-a7ea0ac5-adab-4bac-9929-a34789a07e30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060203912s Feb 11 13:17:09.031: INFO: Pod "downward-api-a7ea0ac5-adab-4bac-9929-a34789a07e30": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06840946s Feb 11 13:17:11.043: INFO: Pod "downward-api-a7ea0ac5-adab-4bac-9929-a34789a07e30": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080045411s Feb 11 13:17:13.051: INFO: Pod "downward-api-a7ea0ac5-adab-4bac-9929-a34789a07e30": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088782633s Feb 11 13:17:15.064: INFO: Pod "downward-api-a7ea0ac5-adab-4bac-9929-a34789a07e30": Phase="Running", Reason="", readiness=true. Elapsed: 10.100979913s Feb 11 13:17:17.079: INFO: Pod "downward-api-a7ea0ac5-adab-4bac-9929-a34789a07e30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.116438886s STEP: Saw pod success Feb 11 13:17:17.079: INFO: Pod "downward-api-a7ea0ac5-adab-4bac-9929-a34789a07e30" satisfied condition "success or failure" Feb 11 13:17:17.086: INFO: Trying to get logs from node iruya-node pod downward-api-a7ea0ac5-adab-4bac-9929-a34789a07e30 container dapi-container: STEP: delete the pod Feb 11 13:17:17.146: INFO: Waiting for pod downward-api-a7ea0ac5-adab-4bac-9929-a34789a07e30 to disappear Feb 11 13:17:17.160: INFO: Pod downward-api-a7ea0ac5-adab-4bac-9929-a34789a07e30 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:17:17.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-445" for this suite. 
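The Downward API test above injects the pod's name, namespace, and IP as environment variables via fieldRef. These are resolved once at container start and never updated afterwards, in contrast to the volume form. A sketch of the container's env section (variable names are illustrative):

package demo

import corev1 "k8s.io/api/core/v1"

// downwardAPIEnv wires pod metadata and status fields into environment
// variables, mirroring what the e2e test's dapi-container consumes.
func downwardAPIEnv() []corev1.EnvVar {
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	return []corev1.EnvVar{
		fieldEnv("POD_NAME", "metadata.name"),
		fieldEnv("POD_NAMESPACE", "metadata.namespace"),
		fieldEnv("POD_IP", "status.podIP"),
	}
}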
Feb 11 13:17:23.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:17:23.402: INFO: namespace downward-api-445 deletion completed in 6.160199843s • [SLOW TEST:18.628 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:17:23.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 11 13:17:34.086: INFO: Successfully updated pod "pod-update-12d9c6a9-99d0-41c6-9321-db8bd88a20f8" STEP: verifying the updated pod is in kubernetes Feb 11 13:17:34.105: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:17:34.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4826" for this suite. 
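The Pods test above submits a pod, mutates it, and verifies the update took ("Pod update OK"). Only a few pod fields are mutable after creation (labels and annotations among them), and updates can fail on resourceVersion conflicts, so the idiomatic client-go pattern is get-modify-update inside a conflict retry. A sketch with recent, context-taking signatures (pod name, namespace, and label are illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods := client.CoreV1().Pods("default")
	// Re-fetch and re-apply the change if another writer bumped the
	// pod's resourceVersion in the meantime.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := pods.Get(context.TODO(), "pod-update-demo", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated" // labels are mutable on a live pod
		_, err = pods.Update(context.TODO(), pod, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}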
Feb 11 13:18:12.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:18:12.244: INFO: namespace pods-4826 deletion completed in 38.132791477s • [SLOW TEST:48.841 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:18:12.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 11 13:18:23.317: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-506f6260-1f92-4b82-a326-7787110b70be,GenerateName:,Namespace:events-9263,SelfLink:/api/v1/namespaces/events-9263/pods/send-events-506f6260-1f92-4b82-a326-7787110b70be,UID:c5a5fc79-bcd3-40a8-a898-610456839ab6,ResourceVersion:23945327,Generation:0,CreationTimestamp:2020-02-11 13:18:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 201719483,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wsrhh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wsrhh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-wsrhh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029c3970} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0029c3990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:18:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:18:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:18:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:18:13 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-11 13:18:13 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-11 13:18:20 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://f21e039170134612c4035181b50d1c7f25a7dfd0a1ed4b0034b779f27416b61b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Feb 11 13:18:25.326: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 11 13:18:27.336: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:18:27.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9263" for this suite. Feb 11 13:19:05.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:19:05.636: INFO: namespace events-9263 deletion completed in 38.277555087s • [SLOW TEST:53.391 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:19:05.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-404df742-39ea-4fe1-9180-007eba4276b9 STEP: Creating a pod to test consume configMaps Feb 11 13:19:05.830: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-529f06f8-73c8-471f-9373-47ba47533615" in namespace "projected-1710" to be "success or failure" Feb 11 13:19:05.840: INFO: Pod "pod-projected-configmaps-529f06f8-73c8-471f-9373-47ba47533615": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.750387ms Feb 11 13:19:07.879: INFO: Pod "pod-projected-configmaps-529f06f8-73c8-471f-9373-47ba47533615": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048810403s Feb 11 13:19:09.907: INFO: Pod "pod-projected-configmaps-529f06f8-73c8-471f-9373-47ba47533615": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076639155s Feb 11 13:19:11.940: INFO: Pod "pod-projected-configmaps-529f06f8-73c8-471f-9373-47ba47533615": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109551308s Feb 11 13:19:13.955: INFO: Pod "pod-projected-configmaps-529f06f8-73c8-471f-9373-47ba47533615": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124608485s Feb 11 13:19:15.990: INFO: Pod "pod-projected-configmaps-529f06f8-73c8-471f-9373-47ba47533615": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.158903435s STEP: Saw pod success Feb 11 13:19:15.990: INFO: Pod "pod-projected-configmaps-529f06f8-73c8-471f-9373-47ba47533615" satisfied condition "success or failure" Feb 11 13:19:15.995: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-529f06f8-73c8-471f-9373-47ba47533615 container projected-configmap-volume-test: STEP: delete the pod Feb 11 13:19:16.211: INFO: Waiting for pod pod-projected-configmaps-529f06f8-73c8-471f-9373-47ba47533615 to disappear Feb 11 13:19:16.248: INFO: Pod pod-projected-configmaps-529f06f8-73c8-471f-9373-47ba47533615 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:19:16.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1710" for this suite. Feb 11 13:19:22.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:19:22.410: INFO: namespace projected-1710 deletion completed in 6.152199884s • [SLOW TEST:16.773 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:19:22.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-9218 I0211 13:19:22.480308 9 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9218, replica count: 1 I0211 13:19:23.531570 9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0211 13:19:24.532523 9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0211 13:19:25.533580 9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0211 13:19:26.534284 9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0211 13:19:27.534828 9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0211 13:19:28.535438 9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0211 13:19:29.536132 9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 11 13:19:29.718: INFO: Created: latency-svc-pl6pc Feb 11 13:19:29.739: INFO: Got endpoints: latency-svc-pl6pc [102.423764ms] Feb 11 13:19:29.897: INFO: Created: latency-svc-8rbgj Feb 11 13:19:29.940: INFO: Got endpoints: latency-svc-8rbgj [200.487864ms] Feb 11 13:19:29.943: INFO: Created: latency-svc-njsqg Feb 11 13:19:29.959: INFO: Got endpoints: latency-svc-njsqg [218.128175ms] Feb 11 13:19:30.127: INFO: Created: latency-svc-k59rh Feb 11 13:19:30.139: INFO: Got endpoints: latency-svc-k59rh [399.316672ms] Feb 11 13:19:30.189: INFO: Created: latency-svc-gj59g Feb 11 13:19:30.349: INFO: Got endpoints: latency-svc-gj59g [607.966188ms] Feb 11 13:19:30.353: INFO: Created: latency-svc-w2n94 Feb 11 13:19:30.375: INFO: Got endpoints: latency-svc-w2n94 [633.655204ms] Feb 11 13:19:30.441: INFO: Created: latency-svc-8fll7 Feb 11 13:19:30.583: INFO: Got endpoints: latency-svc-8fll7 [843.467045ms] Feb 11 13:19:30.607: INFO: Created: latency-svc-9blh2 Feb 11 13:19:30.646: INFO: Got endpoints: latency-svc-9blh2 [270.813438ms] Feb 11 13:19:30.679: INFO: Created: latency-svc-rsbvw Feb 11 13:19:30.746: INFO: Got endpoints: latency-svc-rsbvw [1.004791982s] Feb 11 13:19:30.795: INFO: Created: latency-svc-dpdzt Feb 11 13:19:30.800: INFO: Got endpoints: latency-svc-dpdzt [1.060481161s] Feb 11 13:19:30.855: INFO: Created: latency-svc-2l7g4 Feb 11 13:19:30.929: INFO: Got endpoints: latency-svc-2l7g4 [1.188533728s] Feb 11 13:19:30.988: INFO: Created: latency-svc-hxz4g Feb 11 13:19:31.095: INFO: Got endpoints: latency-svc-hxz4g [1.354796072s] Feb 11 13:19:31.128: INFO: Created: latency-svc-v6m6n Feb 11 13:19:31.153: INFO: Got endpoints: latency-svc-v6m6n [1.411635926s] Feb 11 13:19:31.159: INFO: Created: latency-svc-lx2d8 Feb 11 13:19:31.166: INFO: Got endpoints: latency-svc-lx2d8 [1.425110477s] Feb 11 13:19:31.313: INFO: Created: latency-svc-jxhc7 Feb 11 13:19:31.324: INFO: Got endpoints: latency-svc-jxhc7 [1.582178645s] Feb 11 13:19:31.393: INFO: Created: latency-svc-7xcbr Feb 11 13:19:31.402: INFO: Got endpoints: latency-svc-7xcbr [1.661360335s] Feb 11 13:19:31.504: INFO: Created: latency-svc-f9z6v Feb 11 13:19:31.514: INFO: Got endpoints: latency-svc-f9z6v [1.772390106s] Feb 11 13:19:31.564: INFO: Created: latency-svc-6w5hf Feb 11 13:19:31.579: INFO: Got endpoints: latency-svc-6w5hf [1.638648392s] Feb 11 13:19:31.671: INFO: Created: latency-svc-trdd9 Feb 11 13:19:31.685: INFO: Got endpoints: latency-svc-trdd9 [1.725768549s] Feb 11 13:19:31.751: INFO: Created: latency-svc-h2fkt Feb 11 13:19:31.755: INFO: Got endpoints: latency-svc-h2fkt [1.615638298s] Feb 11 13:19:31.882: INFO: Created: latency-svc-qrpnw Feb 11 
13:19:31.889: INFO: Got endpoints: latency-svc-qrpnw [1.539167351s] Feb 11 13:19:31.952: INFO: Created: latency-svc-tlqhk Feb 11 13:19:31.952: INFO: Got endpoints: latency-svc-tlqhk [1.368615943s] Feb 11 13:19:32.079: INFO: Created: latency-svc-2zqql Feb 11 13:19:32.120: INFO: Got endpoints: latency-svc-2zqql [1.474292878s] Feb 11 13:19:32.124: INFO: Created: latency-svc-s4plk Feb 11 13:19:32.133: INFO: Got endpoints: latency-svc-s4plk [1.387015738s] Feb 11 13:19:32.294: INFO: Created: latency-svc-hrsfn Feb 11 13:19:32.306: INFO: Got endpoints: latency-svc-hrsfn [1.505394863s] Feb 11 13:19:32.472: INFO: Created: latency-svc-f4sgv Feb 11 13:19:32.486: INFO: Got endpoints: latency-svc-f4sgv [1.556123898s] Feb 11 13:19:32.536: INFO: Created: latency-svc-sbllk Feb 11 13:19:32.554: INFO: Got endpoints: latency-svc-sbllk [1.459073887s] Feb 11 13:19:32.675: INFO: Created: latency-svc-vz6v7 Feb 11 13:19:32.690: INFO: Got endpoints: latency-svc-vz6v7 [1.536918457s] Feb 11 13:19:32.735: INFO: Created: latency-svc-s8js7 Feb 11 13:19:32.743: INFO: Got endpoints: latency-svc-s8js7 [1.576805849s] Feb 11 13:19:32.865: INFO: Created: latency-svc-ccx89 Feb 11 13:19:32.903: INFO: Got endpoints: latency-svc-ccx89 [1.578608981s] Feb 11 13:19:33.044: INFO: Created: latency-svc-8bmx8 Feb 11 13:19:33.052: INFO: Got endpoints: latency-svc-8bmx8 [1.649435575s] Feb 11 13:19:33.099: INFO: Created: latency-svc-k2fbd Feb 11 13:19:33.109: INFO: Got endpoints: latency-svc-k2fbd [1.594637232s] Feb 11 13:19:33.291: INFO: Created: latency-svc-6dkdq Feb 11 13:19:33.311: INFO: Got endpoints: latency-svc-6dkdq [1.731907898s] Feb 11 13:19:33.347: INFO: Created: latency-svc-v4kxh Feb 11 13:19:33.353: INFO: Got endpoints: latency-svc-v4kxh [1.66746137s] Feb 11 13:19:33.477: INFO: Created: latency-svc-7ttjl Feb 11 13:19:33.488: INFO: Got endpoints: latency-svc-7ttjl [1.732845837s] Feb 11 13:19:33.546: INFO: Created: latency-svc-t29g4 Feb 11 13:19:33.599: INFO: Got endpoints: latency-svc-t29g4 [1.709584594s] Feb 11 13:19:33.623: INFO: Created: latency-svc-x8qpx Feb 11 13:19:33.693: INFO: Got endpoints: latency-svc-x8qpx [1.740854475s] Feb 11 13:19:33.694: INFO: Created: latency-svc-8d999 Feb 11 13:19:33.777: INFO: Created: latency-svc-h5bdw Feb 11 13:19:33.788: INFO: Got endpoints: latency-svc-8d999 [1.66685059s] Feb 11 13:19:33.830: INFO: Got endpoints: latency-svc-h5bdw [1.696103269s] Feb 11 13:19:33.835: INFO: Created: latency-svc-tjkdw Feb 11 13:19:33.844: INFO: Got endpoints: latency-svc-tjkdw [1.537730495s] Feb 11 13:19:33.950: INFO: Created: latency-svc-fkxm6 Feb 11 13:19:33.963: INFO: Got endpoints: latency-svc-fkxm6 [1.476697572s] Feb 11 13:19:34.007: INFO: Created: latency-svc-s5k8m Feb 11 13:19:34.025: INFO: Got endpoints: latency-svc-s5k8m [1.46953591s] Feb 11 13:19:34.161: INFO: Created: latency-svc-s5ks6 Feb 11 13:19:34.186: INFO: Got endpoints: latency-svc-s5ks6 [1.496133963s] Feb 11 13:19:34.322: INFO: Created: latency-svc-5hjgl Feb 11 13:19:34.329: INFO: Got endpoints: latency-svc-5hjgl [1.585538254s] Feb 11 13:19:34.398: INFO: Created: latency-svc-dhkv4 Feb 11 13:19:34.404: INFO: Got endpoints: latency-svc-dhkv4 [1.50066409s] Feb 11 13:19:34.558: INFO: Created: latency-svc-7kpqn Feb 11 13:19:34.569: INFO: Got endpoints: latency-svc-7kpqn [1.516750021s] Feb 11 13:19:34.625: INFO: Created: latency-svc-hcf5c Feb 11 13:19:34.625: INFO: Got endpoints: latency-svc-hcf5c [1.516169612s] Feb 11 13:19:34.714: INFO: Created: latency-svc-9ts6g Feb 11 13:19:34.727: INFO: Got endpoints: latency-svc-9ts6g [1.415000996s] Feb 11 
13:19:34.783 – 13:19:51.520: INFO: [~190 repetitive paired "Created: latency-svc-…" / "Got endpoints: latency-svc-… [elapsed]" entries elided; every per-service latency from this run is preserved in the Latencies summary below.]
Feb 11 13:19:51.520: INFO: Latencies: [200.487864ms 218.128175ms 270.813438ms 399.316672ms 607.966188ms 633.655204ms 843.467045ms 1.004791982s 1.060481161s 1.091617506s 1.111248758s 1.176863193s 1.188533728s 1.196368455s 1.198020214s 1.198778819s 1.205296644s 1.215590998s 1.218455408s 1.223972231s 1.225439161s 1.226329957s 1.239126529s 1.240931804s 1.242175896s 1.249504201s 1.289912262s 1.294307117s 1.29893164s 1.304774575s 1.312079418s 1.313853479s 1.325226208s 1.325790955s 1.32889076s 1.3324686s 1.336263108s 1.337332024s 1.348562505s 1.350589794s 1.352007939s 1.353239219s 1.354796072s 1.355915055s 1.356561661s 1.35879774s 1.362475208s 1.363354997s 1.366684701s 1.368615943s 1.368697337s 1.372658008s 1.374188181s 1.387015738s 1.387933158s 1.391626237s 1.393590476s 1.399036908s 1.402290629s 1.402469657s 1.408011806s 1.411111951s 1.411393835s 1.411635926s 1.413804812s 1.415000996s 1.416949695s 1.419818934s 1.421944582s 1.425110477s 1.427405838s 1.429992502s 1.439926603s 1.448978065s 1.459073887s 1.468434556s 1.46953591s 1.474292878s 1.476138507s 1.476697572s 1.48073723s 1.485229632s 1.485922641s 1.490904041s 1.492766538s 1.496133963s 1.500624606s 1.50066409s 1.505394863s 1.512225567s 1.51550584s 1.516169612s 1.516750021s 1.527133728s 1.534206381s 1.536918457s 1.537730495s 1.539167351s 1.54001113s 1.552486322s 1.556123898s 1.561309951s 1.561896227s 1.573431797s 1.576776602s 1.576805849s 1.578608981s 1.582178645s 1.585538254s 1.593297394s 1.594637232s 1.598133117s 1.599277434s 1.604292131s 1.604427094s 1.613375338s 1.615638298s 1.620086783s 1.621668219s 1.628395525s 1.630001693s 1.638648392s 1.641251887s 1.642436219s 1.644318242s 1.649435575s 1.651641082s 1.65281676s 1.654271243s 1.65515802s 1.65546389s 1.660928815s 1.661360335s 1.665239022s 1.66685059s 1.66746137s 1.667816389s 1.667874366s 1.676174146s 1.6797254s 1.684121437s 1.685228353s 1.688696909s 1.696103269s 1.698311067s 1.69864333s 1.699567622s 1.6996318s 1.704833917s 1.709584594s 1.716141922s 1.716890086s 1.722038697s 1.724165445s 1.725768549s 1.729126662s 1.731907898s 1.732845837s 1.73348681s 1.735555193s 1.740854475s 1.746593907s 1.749177033s 1.756340181s 1.772390106s 1.787602416s 1.789878293s 1.807515852s 1.809627186s 1.833273559s 1.840360218s 1.840736102s 1.841360069s 1.847384032s 1.852176212s 1.868330822s 1.886521549s 1.886711004s 1.905765781s 1.936637364s 1.943179343s 1.960042631s 1.966453435s 1.969401522s 1.995378696s 2.21673595s 2.247889493s 2.378396969s 2.489238668s 2.65946022s 2.673575299s 2.787310909s 2.820438747s 2.823623326s 2.844585315s 2.870321054s 2.905235882s 2.908133174s 2.909046845s 2.948686973s]
Feb 11 13:19:51.521: INFO: 50 %ile: 1.556123898s
Feb 11 13:19:51.521: INFO: 90 %ile: 1.943179343s
Feb 11 13:19:51.521: INFO: 99 %ile: 2.909046845s
Feb 11 13:19:51.521: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:19:51.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9218" for this suite.
Feb 11 13:20:29.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:20:29.713: INFO: namespace svc-latency-9218 deletion completed in 38.172669617s
• [SLOW TEST:67.302 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
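The 50/90/99 %ile lines above are derived from the 200 sorted samples in the Latencies list. As a rough illustration of that arithmetic (a standalone sketch, not the e2e framework's own helper — the nearest-rank convention used here is an assumption), in Go:

```go
package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// percentileOf returns the value at rank ceil(p*N) of a sorted sample set,
// the simple nearest-rank convention (assumed; the framework may round
// differently at the edges).
func percentileOf(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(math.Ceil(p*float64(len(sorted)))) - 1
	if idx < 0 {
		idx = 0
	}
	return sorted[idx]
}

func main() {
	// A handful of the 200 samples from the run above, as nanosecond literals.
	samples := []time.Duration{
		200487864, 2948686973, 1556123898, 1943179343, 2909046845,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{0.50, 0.90, 0.99} {
		fmt.Printf("%v %%ile: %v\n", p*100, percentileOf(samples, p))
	}
}
```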
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:20:29.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-466e184e-6d96-4e14-ab83-a88f29586221
STEP: Creating secret with name secret-projected-all-test-volume-36be043a-e12c-459f-a5be-fef7fd234d42
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 11 13:20:29.878: INFO: Waiting up to 5m0s for pod "projected-volume-c0295081-e38f-4893-9d89-f707561ca698" in namespace "projected-792" to be "success or failure"
Feb 11 13:20:29.928: INFO: Pod "projected-volume-c0295081-e38f-4893-9d89-f707561ca698": Phase="Pending", Reason="", readiness=false. Elapsed: 49.701422ms
Feb 11 13:20:31.937: INFO: Pod "projected-volume-c0295081-e38f-4893-9d89-f707561ca698": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058365638s
Feb 11 13:20:33.952: INFO: Pod "projected-volume-c0295081-e38f-4893-9d89-f707561ca698": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073998575s
Feb 11 13:20:35.966: INFO: Pod "projected-volume-c0295081-e38f-4893-9d89-f707561ca698": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087967142s
Feb 11 13:20:37.976: INFO: Pod "projected-volume-c0295081-e38f-4893-9d89-f707561ca698": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.097349668s
STEP: Saw pod success
Feb 11 13:20:37.976: INFO: Pod "projected-volume-c0295081-e38f-4893-9d89-f707561ca698" satisfied condition "success or failure"
Feb 11 13:20:37.981: INFO: Trying to get logs from node iruya-node pod projected-volume-c0295081-e38f-4893-9d89-f707561ca698 container projected-all-volume-test:
STEP: delete the pod
Feb 11 13:20:38.077: INFO: Waiting for pod projected-volume-c0295081-e38f-4893-9d89-f707561ca698 to disappear
Feb 11 13:20:38.244: INFO: Pod projected-volume-c0295081-e38f-4893-9d89-f707561ca698 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:20:38.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-792" for this suite.
Feb 11 13:20:44.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:20:44.462: INFO: namespace projected-792 deletion completed in 6.204631825s
• [SLOW TEST:14.749 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:20:44.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-cbf75035-2948-4fcc-909e-28b1de991786
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-cbf75035-2948-4fcc-909e-28b1de991786
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:22:04.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1960" for this suite.
Feb 11 13:22:26.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:22:26.600: INFO: namespace projected-1960 deletion completed in 22.215753493s
• [SLOW TEST:102.137 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
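Both projected-volume tests above exercise the same API surface: a single `projected` volume that merges configMap, secret, and downwardAPI sources into one directory, which the kubelet then keeps in sync as the backing objects change. A minimal sketch of how such a pod spec is built with the k8s.io/api types (all names, image, and command here are illustrative, not the tests' generated ones):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedPod builds a pod whose single volume merges a ConfigMap, a Secret,
// and downward-API metadata, as the "Projected combined" test does.
func projectedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-volume-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							{ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
							}},
							{Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"},
							}},
							{DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							}},
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-all-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls /projected-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/projected-volume"}},
			}},
		},
	}
}
```

The "updates should be reflected in volume" test then relies on the kubelet's periodic volume sync: mutating the ConfigMap is eventually visible inside the running pod without a restart, which is why that test spends most of its 102 seconds in "waiting to observe update in volume".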
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:22:26.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb 11 13:22:26.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-425'
Feb 11 13:22:29.213: INFO: stderr: ""
Feb 11 13:22:29.213: INFO: stdout: "pod/pause created\n"
Feb 11 13:22:29.213: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 11 13:22:29.214: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-425" to be "running and ready"
Feb 11 13:22:29.221: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.191176ms
Feb 11 13:22:31.226: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012840431s
Feb 11 13:22:33.251: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037540517s
Feb 11 13:22:35.259: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045226414s
Feb 11 13:22:37.270: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.056544364s
Feb 11 13:22:37.270: INFO: Pod "pause" satisfied condition "running and ready"
Feb 11 13:22:37.270: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 11 13:22:37.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-425'
Feb 11 13:22:37.449: INFO: stderr: ""
Feb 11 13:22:37.449: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 11 13:22:37.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-425'
Feb 11 13:22:37.626: INFO: stderr: ""
Feb 11 13:22:37.626: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 11 13:22:37.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-425'
Feb 11 13:22:37.735: INFO: stderr: ""
Feb 11 13:22:37.735: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 11 13:22:37.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-425'
Feb 11 13:22:37.924: INFO: stderr: ""
Feb 11 13:22:37.925: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb 11 13:22:37.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-425'
Feb 11 13:22:38.109: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 13:22:38.109: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 11 13:22:38.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-425'
Feb 11 13:22:38.351: INFO: stderr: "No resources found.\n"
Feb 11 13:22:38.352: INFO: stdout: ""
Feb 11 13:22:38.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-425 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 11 13:22:38.541: INFO: stderr: ""
Feb 11 13:22:38.541: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:22:38.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-425" for this suite.
Feb 11 13:22:44.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:22:44.763: INFO: namespace kubectl-425 deletion completed in 6.210467511s
• [SLOW TEST:18.162 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
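The add/remove flow above is plain `kubectl label pods pause testing-label=testing-label-value` followed by `kubectl label pods pause testing-label-` (trailing dash removes the key). At the API level both are a strategic-merge patch on the pod, where a JSON null value deletes the label. A sketch with modern client-go (clientset construction elided; this is the API-level equivalent, not what the test itself runs — it shells out to kubectl):

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// setLabel adds (value != nil) or removes (value == nil) a label on a pod via
// a strategic-merge patch, mirroring what `kubectl label` does under the hood.
func setLabel(ctx context.Context, cs kubernetes.Interface, ns, pod, key string, value *string) error {
	val := "null" // JSON null deletes the key under merge-patch semantics
	if value != nil {
		val = fmt.Sprintf("%q", *value)
	}
	patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{%q:%s}}}`, key, val))
	_, err := cs.CoreV1().Pods(ns).Patch(ctx, pod, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```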
SSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:22:44.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-47a9c155-292d-4f97-87b3-e56f915a29e0
STEP: Creating a pod to test consume secrets
Feb 11 13:22:44.933: INFO: Waiting up to 5m0s for pod "pod-secrets-73fd2b95-74f5-48b4-9794-d907370d5082" in namespace "secrets-9703" to be "success or failure"
Feb 11 13:22:44.984: INFO: Pod "pod-secrets-73fd2b95-74f5-48b4-9794-d907370d5082": Phase="Pending", Reason="", readiness=false. Elapsed: 51.071625ms
Feb 11 13:22:46.994: INFO: Pod "pod-secrets-73fd2b95-74f5-48b4-9794-d907370d5082": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061455532s
Feb 11 13:22:49.359: INFO: Pod "pod-secrets-73fd2b95-74f5-48b4-9794-d907370d5082": Phase="Pending", Reason="", readiness=false. Elapsed: 4.426546154s
Feb 11 13:22:51.371: INFO: Pod "pod-secrets-73fd2b95-74f5-48b4-9794-d907370d5082": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438644232s
Feb 11 13:22:53.383: INFO: Pod "pod-secrets-73fd2b95-74f5-48b4-9794-d907370d5082": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.45059064s
STEP: Saw pod success
Feb 11 13:22:53.384: INFO: Pod "pod-secrets-73fd2b95-74f5-48b4-9794-d907370d5082" satisfied condition "success or failure"
Feb 11 13:22:53.391: INFO: Trying to get logs from node iruya-node pod pod-secrets-73fd2b95-74f5-48b4-9794-d907370d5082 container secret-volume-test:
STEP: delete the pod
Feb 11 13:22:53.581: INFO: Waiting for pod pod-secrets-73fd2b95-74f5-48b4-9794-d907370d5082 to disappear
Feb 11 13:22:53.588: INFO: Pod pod-secrets-73fd2b95-74f5-48b4-9794-d907370d5082 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:22:53.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9703" for this suite.
Feb 11 13:22:59.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:22:59.826: INFO: namespace secrets-9703 deletion completed in 6.230896931s
• [SLOW TEST:15.062 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
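The "with mappings" variant above differs from a plain secret volume only in the `items` list, which remaps a secret key to a custom file path (and optionally a mode) instead of the default one-file-per-key layout. A sketch of the relevant volume source (key, path, and mode here are illustrative; the test generates its own UUID-suffixed names):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// secretVolumeWithMapping projects a single key of a Secret to a chosen file
// name and mode inside the volume, the behavior this test verifies.
func secretVolumeWithMapping() corev1.Volume {
	mode := int32(0400) // illustrative file mode
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map", // hypothetical secret name
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "new-path-data-1",
					Mode: &mode,
				}},
			},
		},
	}
}
```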
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:22:59.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 11 13:22:59.965: INFO: Waiting up to 5m0s for pod "pod-cd13cbb4-df93-49a6-9e3a-c508892f7cd8" in namespace "emptydir-6536" to be "success or failure"
Feb 11 13:23:00.099: INFO: Pod "pod-cd13cbb4-df93-49a6-9e3a-c508892f7cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 133.535467ms
Feb 11 13:23:02.116: INFO: Pod "pod-cd13cbb4-df93-49a6-9e3a-c508892f7cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150260021s
Feb 11 13:23:04.124: INFO: Pod "pod-cd13cbb4-df93-49a6-9e3a-c508892f7cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159061036s
Feb 11 13:23:06.134: INFO: Pod "pod-cd13cbb4-df93-49a6-9e3a-c508892f7cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168256123s
Feb 11 13:23:08.143: INFO: Pod "pod-cd13cbb4-df93-49a6-9e3a-c508892f7cd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.177820502s
STEP: Saw pod success
Feb 11 13:23:08.143: INFO: Pod "pod-cd13cbb4-df93-49a6-9e3a-c508892f7cd8" satisfied condition "success or failure"
Feb 11 13:23:08.148: INFO: Trying to get logs from node iruya-node pod pod-cd13cbb4-df93-49a6-9e3a-c508892f7cd8 container test-container:
STEP: delete the pod
Feb 11 13:23:08.204: INFO: Waiting for pod pod-cd13cbb4-df93-49a6-9e3a-c508892f7cd8 to disappear
Feb 11 13:23:08.232: INFO: Pod pod-cd13cbb4-df93-49a6-9e3a-c508892f7cd8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:23:08.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6536" for this suite.
Feb 11 13:23:14.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:23:14.413: INFO: namespace emptydir-6536 deletion completed in 6.159378508s
• [SLOW TEST:14.587 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
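Every pod-based test in this section follows the same pattern visible in the log: create a short-lived pod, then poll its phase until it reaches Succeeded, with a 5m0s cap (the repeated `Phase="Pending" ... Elapsed: ...` lines). A condensed sketch of that wait loop using the apimachinery wait helpers (the framework's real helper adds more bookkeeping and log formatting):

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccess polls every 2s, up to 5m, until the pod reaches Succeeded;
// a Failed phase ends the wait with an error — the "success or failure"
// condition reported throughout the log above.
func waitForPodSuccess(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		default:
			return false, nil // still Pending or Running; keep polling
		}
	})
}
```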
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:23:14.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-mhls
STEP: Creating a pod to test atomic-volume-subpath
Feb 11 13:23:14.609: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mhls" in namespace "subpath-6825" to be "success or failure"
Feb 11 13:23:14.627: INFO: Pod "pod-subpath-test-downwardapi-mhls": Phase="Pending", Reason="", readiness=false. Elapsed: 17.653046ms
Feb 11 13:23:16.646: INFO: Pod "pod-subpath-test-downwardapi-mhls": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036219134s
Feb 11 13:23:18.655: INFO: Pod "pod-subpath-test-downwardapi-mhls": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045174634s
Feb 11 13:23:20.668: INFO: Pod "pod-subpath-test-downwardapi-mhls": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057857389s
Feb 11 13:23:22.679: INFO: Pod "pod-subpath-test-downwardapi-mhls": Phase="Running", Reason="", readiness=true. Elapsed: 8.069216697s
Feb 11 13:23:24.692: INFO: Pod "pod-subpath-test-downwardapi-mhls": Phase="Running", Reason="", readiness=true. Elapsed: 10.082514326s
Feb 11 13:23:26.711: INFO: Pod "pod-subpath-test-downwardapi-mhls": Phase="Running", Reason="", readiness=true. Elapsed: 12.101564493s
Feb 11 13:23:28.722: INFO: Pod "pod-subpath-test-downwardapi-mhls": Phase="Running", Reason="", readiness=true. Elapsed: 14.111942052s
Feb 11 13:23:30.731: INFO: Pod "pod-subpath-test-downwardapi-mhls": Phase="Running", Reason="", readiness=true. Elapsed: 16.121361977s
Feb 11 13:23:32.739: INFO: Pod "pod-subpath-test-downwardapi-mhls": Phase="Running", Reason="", readiness=true. Elapsed: 18.129530173s
Feb 11 13:23:34.749: INFO: Pod "pod-subpath-test-downwardapi-mhls": Phase="Running", Reason="", readiness=true. Elapsed: 20.138878542s
Feb 11 13:23:36.759: INFO: Pod "pod-subpath-test-downwardapi-mhls": Phase="Running", Reason="", readiness=true. Elapsed: 22.149236298s
Feb 11 13:23:38.767: INFO: Pod "pod-subpath-test-downwardapi-mhls": Phase="Running", Reason="", readiness=true. Elapsed: 24.157395481s
Feb 11 13:23:40.789: INFO: Pod "pod-subpath-test-downwardapi-mhls": Phase="Running", Reason="", readiness=true. Elapsed: 26.179111235s
Feb 11 13:23:42.818: INFO: Pod "pod-subpath-test-downwardapi-mhls": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.20787464s
STEP: Saw pod success
Feb 11 13:23:42.818: INFO: Pod "pod-subpath-test-downwardapi-mhls" satisfied condition "success or failure"
Feb 11 13:23:42.826: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-mhls container test-container-subpath-downwardapi-mhls:
STEP: delete the pod
Feb 11 13:23:43.132: INFO: Waiting for pod pod-subpath-test-downwardapi-mhls to disappear
Feb 11 13:23:43.148: INFO: Pod pod-subpath-test-downwardapi-mhls no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-mhls
Feb 11 13:23:43.149: INFO: Deleting pod "pod-subpath-test-downwardapi-mhls" in namespace "subpath-6825"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:23:43.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6825" for this suite.
Feb 11 13:23:49.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:23:49.329: INFO: namespace subpath-6825 deletion completed in 6.160060481s
• [SLOW TEST:34.915 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:23:49.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 13:23:49.489: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 11 13:23:52.796: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:23:53.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-743" for this suite.
Feb 11 13:24:00.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:24:00.325: INFO: namespace replication-controller-743 deletion completed in 6.826118598s
• [SLOW TEST:10.996 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
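The ReplicationController test above provokes its failure condition with a namespace ResourceQuota: the quota admits only two pods, the rc asks for more, and the rc's status then surfaces a quota-exceeded condition until the rc is scaled down. A sketch of a quota object that caps a namespace at two pods (name mirrors the log; field values are the only substance):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podQuota caps a namespace at two running pods, mirroring the quota the
// test creates before asking its rc for more replicas than can be admitted.
func podQuota(ns string) *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test", Namespace: ns},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("2"),
			},
		},
	}
}
```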
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:24:00.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 11 13:24:01.974: INFO: Waiting up to 5m0s for pod "downward-api-83673d69-3aca-4059-a3c4-70bd124c7794" in namespace "downward-api-1164" to be "success or failure"
Feb 11 13:24:01.987: INFO: Pod "downward-api-83673d69-3aca-4059-a3c4-70bd124c7794": Phase="Pending", Reason="", readiness=false. Elapsed: 12.417067ms
Feb 11 13:24:05.038: INFO: Pod "downward-api-83673d69-3aca-4059-a3c4-70bd124c7794": Phase="Pending", Reason="", readiness=false. Elapsed: 3.06383443s
Feb 11 13:24:07.049: INFO: Pod "downward-api-83673d69-3aca-4059-a3c4-70bd124c7794": Phase="Pending", Reason="", readiness=false. Elapsed: 5.074484764s
Feb 11 13:24:09.223: INFO: Pod "downward-api-83673d69-3aca-4059-a3c4-70bd124c7794": Phase="Pending", Reason="", readiness=false. Elapsed: 7.248710356s
Feb 11 13:24:11.238: INFO: Pod "downward-api-83673d69-3aca-4059-a3c4-70bd124c7794": Phase="Pending", Reason="", readiness=false. Elapsed: 9.26328174s
Feb 11 13:24:13.248: INFO: Pod "downward-api-83673d69-3aca-4059-a3c4-70bd124c7794": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.273009705s
STEP: Saw pod success
Feb 11 13:24:13.248: INFO: Pod "downward-api-83673d69-3aca-4059-a3c4-70bd124c7794" satisfied condition "success or failure"
Feb 11 13:24:13.252: INFO: Trying to get logs from node iruya-node pod downward-api-83673d69-3aca-4059-a3c4-70bd124c7794 container dapi-container:
STEP: delete the pod
Feb 11 13:24:13.439: INFO: Waiting for pod downward-api-83673d69-3aca-4059-a3c4-70bd124c7794 to disappear
Feb 11 13:24:13.455: INFO: Pod downward-api-83673d69-3aca-4059-a3c4-70bd124c7794 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:24:13.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1164" for this suite.
Feb 11 13:24:19.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:24:19.677: INFO: namespace downward-api-1164 deletion completed in 6.204932069s
• [SLOW TEST:19.351 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
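The host-IP test relies on the downward API's `fieldRef` env-var source: the kubelet resolves `status.hostIP` at container start and injects it into the environment. The essential container snippet as Go API types (container name matches the log; image and command are placeholders):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// downwardAPIEnvContainer exposes the pod's host IP and pod IP to the process
// via fieldRef env vars — the mechanism the test above verifies.
func downwardAPIEnvContainer() corev1.Container {
	return corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{
			{
				Name: "HOST_IP",
				ValueFrom: &corev1.EnvVarSource{
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
				},
			},
			{
				Name: "POD_IP",
				ValueFrom: &corev1.EnvVarSource{
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"},
				},
			},
		},
	}
}
```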
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:24:19.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 13:24:19.877: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e03fa9d-6f83-4882-8861-f2211be19e94" in namespace "downward-api-975" to be "success or failure"
Feb 11 13:24:19.897: INFO: Pod "downwardapi-volume-8e03fa9d-6f83-4882-8861-f2211be19e94": Phase="Pending", Reason="", readiness=false. Elapsed: 19.171943ms
Feb 11 13:24:21.906: INFO: Pod "downwardapi-volume-8e03fa9d-6f83-4882-8861-f2211be19e94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027872338s
Feb 11 13:24:23.926: INFO: Pod "downwardapi-volume-8e03fa9d-6f83-4882-8861-f2211be19e94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048690853s
Feb 11 13:24:25.936: INFO: Pod "downwardapi-volume-8e03fa9d-6f83-4882-8861-f2211be19e94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05817227s
Feb 11 13:24:27.946: INFO: Pod "downwardapi-volume-8e03fa9d-6f83-4882-8861-f2211be19e94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068742809s
STEP: Saw pod success
Feb 11 13:24:27.947: INFO: Pod "downwardapi-volume-8e03fa9d-6f83-4882-8861-f2211be19e94" satisfied condition "success or failure"
Feb 11 13:24:27.950: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8e03fa9d-6f83-4882-8861-f2211be19e94 container client-container:
STEP: delete the pod
Feb 11 13:24:28.110: INFO: Waiting for pod downwardapi-volume-8e03fa9d-6f83-4882-8861-f2211be19e94 to disappear
Feb 11 13:24:28.127: INFO: Pod downwardapi-volume-8e03fa9d-6f83-4882-8861-f2211be19e94 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:24:28.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-975" for this suite.
Feb 11 13:24:34.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:24:34.406: INFO: namespace downward-api-975 deletion completed in 6.223069471s • [SLOW TEST:14.729 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:24:34.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6894.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6894.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 11 13:24:46.570: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-6894/dns-test-4f2a4b80-489c-4e8f-99a0-c55c66e072d3: the server could not find the requested resource (get pods dns-test-4f2a4b80-489c-4e8f-99a0-c55c66e072d3) Feb 11 13:24:46.586: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-6894/dns-test-4f2a4b80-489c-4e8f-99a0-c55c66e072d3: the server could not find the requested resource (get pods dns-test-4f2a4b80-489c-4e8f-99a0-c55c66e072d3) Feb 11 13:24:46.596: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6894/dns-test-4f2a4b80-489c-4e8f-99a0-c55c66e072d3: the server could not find the requested resource (get pods dns-test-4f2a4b80-489c-4e8f-99a0-c55c66e072d3) Feb 11 13:24:46.605: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6894/dns-test-4f2a4b80-489c-4e8f-99a0-c55c66e072d3: the server could not find the requested resource (get pods dns-test-4f2a4b80-489c-4e8f-99a0-c55c66e072d3) Feb 11 13:24:46.632: INFO: Lookups using dns-6894/dns-test-4f2a4b80-489c-4e8f-99a0-c55c66e072d3 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord] Feb 11 13:24:51.718: INFO: DNS probes using dns-6894/dns-test-4f2a4b80-489c-4e8f-99a0-c55c66e072d3 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:24:51.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6894" for this suite. Feb 11 13:24:59.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:25:00.049: INFO: namespace dns-6894 deletion completed in 8.165328272s • [SLOW TEST:25.643 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:25:00.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 11 13:25:17.218: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 11 13:25:17.223: INFO: Pod pod-with-poststart-http-hook still exists Feb 11 13:25:19.223: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 11 13:25:19.248: INFO: Pod pod-with-poststart-http-hook still exists Feb 11 13:25:21.224: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 11 13:25:21.235: INFO: Pod pod-with-poststart-http-hook still exists Feb 11 13:25:23.224: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 11 13:25:23.232: INFO: Pod pod-with-poststart-http-hook still exists Feb 11 13:25:25.224: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 11 13:25:25.232: INFO: Pod pod-with-poststart-http-hook still exists Feb 11 13:25:27.224: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 11 13:25:27.234: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:25:27.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2710" for this suite. Feb 11 13:25:49.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 11 13:25:49.474: INFO: namespace container-lifecycle-hook-2710 deletion completed in 22.231632205s • [SLOW TEST:49.424 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 11 13:25:49.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-109.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-109.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-109.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-109.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-109.svc.cluster.local SRV)" && test -n 
"$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-109.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-109.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-109.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-109.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-109.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-109.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-109.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-109.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 211.100.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.100.211_udp@PTR;check="$$(dig +tcp +noall +answer +search 211.100.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.100.211_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-109.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-109.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-109.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-109.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-109.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-109.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-109.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-109.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-109.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-109.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-109.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-109.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-109.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 211.100.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.100.211_udp@PTR;check="$$(dig +tcp +noall +answer +search 211.100.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.100.211_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 11 13:26:01.907: INFO: Unable to read wheezy_udp@dns-test-service.dns-109.svc.cluster.local from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.917: INFO: Unable to read wheezy_tcp@dns-test-service.dns-109.svc.cluster.local from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.924: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-109.svc.cluster.local from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.932: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-109.svc.cluster.local from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.939: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-109.svc.cluster.local from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.944: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-109.svc.cluster.local from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.949: INFO: Unable to read wheezy_udp@PodARecord from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.955: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.960: INFO: Unable to read 10.106.100.211_udp@PTR from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.964: INFO: Unable to read 10.106.100.211_tcp@PTR from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.968: INFO: Unable to read jessie_udp@dns-test-service.dns-109.svc.cluster.local from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.973: INFO: Unable to read jessie_tcp@dns-test-service.dns-109.svc.cluster.local from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.977: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-109.svc.cluster.local from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find 
the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.982: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-109.svc.cluster.local from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.987: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-109.svc.cluster.local from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.990: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-109.svc.cluster.local from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.993: INFO: Unable to read jessie_udp@PodARecord from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:01.996: INFO: Unable to read jessie_tcp@PodARecord from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:02.000: INFO: Unable to read 10.106.100.211_udp@PTR from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:02.006: INFO: Unable to read 10.106.100.211_tcp@PTR from pod dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4: the server could not find the requested resource (get pods dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4) Feb 11 13:26:02.006: INFO: Lookups using dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4 failed for: [wheezy_udp@dns-test-service.dns-109.svc.cluster.local wheezy_tcp@dns-test-service.dns-109.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-109.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-109.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-109.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-109.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.106.100.211_udp@PTR 10.106.100.211_tcp@PTR jessie_udp@dns-test-service.dns-109.svc.cluster.local jessie_tcp@dns-test-service.dns-109.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-109.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-109.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-109.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-109.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.106.100.211_udp@PTR 10.106.100.211_tcp@PTR] Feb 11 13:26:07.126: INFO: DNS probes using dns-109/dns-test-05ee4f38-6ba5-49bb-94c1-f483adc42df4 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 11 13:26:07.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-109" for this suite. 
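The record names probed above come from a headless service plus a named service port, which is what yields per-endpoint A records and the _http._tcp SRV names. A minimal client-go sketch of creating such a headless service follows; the service name mirrors the log, but the namespace, selector, and port are assumptions for illustration.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service"},
		Spec: corev1.ServiceSpec{
			// ClusterIP "None" makes the service headless: cluster DNS answers
			// with the endpoints' own A records instead of a virtual IP.
			ClusterIP: corev1.ClusterIPNone,
			Selector:  map[string]string{"dns-test": "true"}, // assumed label
			// A named port is what produces the _http._tcp.<service> SRV records.
			Ports: []corev1.ServicePort{{Name: "http", Protocol: corev1.ProtocolTCP, Port: 80}},
		},
	}
	if _, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}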
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:26:13.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-e70390c2-d680-4674-ba7d-9d6b7d47895f in namespace container-probe-1034
Feb 11 13:26:21.970: INFO: Started pod liveness-e70390c2-d680-4674-ba7d-9d6b7d47895f in namespace container-probe-1034
STEP: checking the pod's current state and verifying that restartCount is present
Feb 11 13:26:21.974: INFO: Initial restart count of pod liveness-e70390c2-d680-4674-ba7d-9d6b7d47895f is 0
Feb 11 13:26:44.084: INFO: Restart count of pod container-probe-1034/liveness-e70390c2-d680-4674-ba7d-9d6b7d47895f is now 1 (22.109394106s elapsed)
Feb 11 13:27:04.174: INFO: Restart count of pod container-probe-1034/liveness-e70390c2-d680-4674-ba7d-9d6b7d47895f is now 2 (42.200233378s elapsed)
Feb 11 13:27:22.258: INFO: Restart count of pod container-probe-1034/liveness-e70390c2-d680-4674-ba7d-9d6b7d47895f is now 3 (1m0.283407987s elapsed)
Feb 11 13:27:42.375: INFO: Restart count of pod container-probe-1034/liveness-e70390c2-d680-4674-ba7d-9d6b7d47895f is now 4 (1m20.401019497s elapsed)
Feb 11 13:28:57.175: INFO: Restart count of pod container-probe-1034/liveness-e70390c2-d680-4674-ba7d-9d6b7d47895f is now 5 (2m35.200969276s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:28:57.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1034" for this suite.
Feb 11 13:29:03.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:29:03.467: INFO: namespace container-probe-1034 deletion completed in 6.185676518s

• [SLOW TEST:169.677 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
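What "monotonically increasing" means operationally: the kubelet restarts the container each time its liveness probe fails, and status.containerStatuses[].restartCount may only ever grow. A rough client-go polling sketch, not the suite's code; the pod name, iteration count, and interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	var last int32 = -1
	for i := 0; i < 30; i++ {
		// "liveness-pod" is a placeholder name for a pod with a failing liveness probe.
		pod, err := cs.CoreV1().Pods("container-probe-1034").Get(context.TODO(), "liveness-pod", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if len(pod.Status.ContainerStatuses) > 0 {
			rc := pod.Status.ContainerStatuses[0].RestartCount
			if rc < last {
				panic("restart count decreased; the kubelet should never report this")
			}
			if rc != last {
				fmt.Printf("restart count is now %d\n", rc)
			}
			last = rc
		}
		time.Sleep(10 * time.Second)
	}
}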
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:29:03.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 11 13:29:03.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-7782'
Feb 11 13:29:03.788: INFO: stderr: ""
Feb 11 13:29:03.789: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb 11 13:29:13.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-7782 -o json'
Feb 11 13:29:13.985: INFO: stderr: ""
Feb 11 13:29:13.985: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-11T13:29:03Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-7782\",\n \"resourceVersion\": \"23948125\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7782/pods/e2e-test-nginx-pod\",\n \"uid\": \"f6114a6b-90fa-424a-863e-66662d9c6996\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-6g442\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-6g442\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-6g442\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-11T13:29:03Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-11T13:29:11Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-11T13:29:11Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-11T13:29:03Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://1ea0ac2bf16a20e3fc4c1610451c3065676dbc0c7b1bfb7ca451014028215f83\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-11T13:29:10Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.3.65\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-11T13:29:03Z\"\n }\n}\n"
STEP: replace the image in the pod
Feb 11 13:29:13.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7782'
Feb 11 13:29:14.380: INFO: stderr: ""
Feb 11 13:29:14.380: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb 11 13:29:14.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7782'
Feb 11 13:29:20.430: INFO: stderr: ""
Feb 11 13:29:20.430: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:29:20.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7782" for this suite.
Feb 11 13:29:26.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:29:26.607: INFO: namespace kubectl-7782 deletion completed in 6.155169988s

• [SLOW TEST:23.139 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:29:26.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb 11 13:29:27.273: INFO: created pod pod-service-account-defaultsa
Feb 11 13:29:27.273: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 11 13:29:27.288: INFO: created pod pod-service-account-mountsa
Feb 11 13:29:27.288: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 11 13:29:27.335: INFO: created pod pod-service-account-nomountsa
Feb 11 13:29:27.335: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 11 13:29:27.344: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 11 13:29:27.344: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 11 13:29:27.404: INFO: created pod pod-service-account-mountsa-mountspec
Feb 11 13:29:27.404: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 11 13:29:27.505: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 11 13:29:27.506: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 11 13:29:27.655: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 11 13:29:27.655: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 11 13:29:27.723: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 11 13:29:27.723: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 11 13:29:27.820: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 11 13:29:27.820: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:29:27.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-589" for this suite.
Feb 11 13:30:02.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:30:02.960: INFO: namespace svcaccounts-589 deletion completed in 35.103230843s

• [SLOW TEST:36.351 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
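The "token volume mount: true/false" lines above reflect automountServiceAccountToken, which can be set on the ServiceAccount or overridden per pod; when both are set, the pod-level field wins, which is what the mountspec/nomountspec matrix exercises. A minimal sketch of the pod-level opt-out; the image and namespace are assumptions.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	optOut := false
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountsa"},
		Spec: corev1.PodSpec{
			ServiceAccountName: "default",
			// Pod-level override: no token volume is mounted even if the
			// ServiceAccount itself would allow automounting.
			AutomountServiceAccountToken: &optOut,
			Containers: []corev1.Container{{
				Name:    "token-test",
				Image:   "busybox:1.29", // assumed image
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}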
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:30:02.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-26911152-46a5-4249-a80e-1adbbecd2600
Feb 11 13:30:03.115: INFO: Pod name my-hostname-basic-26911152-46a5-4249-a80e-1adbbecd2600: Found 0 pods out of 1
Feb 11 13:30:08.126: INFO: Pod name my-hostname-basic-26911152-46a5-4249-a80e-1adbbecd2600: Found 1 pods out of 1
Feb 11 13:30:08.126: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-26911152-46a5-4249-a80e-1adbbecd2600" are running
Feb 11 13:30:12.145: INFO: Pod "my-hostname-basic-26911152-46a5-4249-a80e-1adbbecd2600-tgk2l" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 13:30:03 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 13:30:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-26911152-46a5-4249-a80e-1adbbecd2600]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 13:30:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-26911152-46a5-4249-a80e-1adbbecd2600]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 13:30:03 +0000 UTC Reason: Message:}])
Feb 11 13:30:12.145: INFO: Trying to dial the pod
Feb 11 13:30:17.226: INFO: Controller my-hostname-basic-26911152-46a5-4249-a80e-1adbbecd2600: Got expected result from replica 1 [my-hostname-basic-26911152-46a5-4249-a80e-1adbbecd2600-tgk2l]: "my-hostname-basic-26911152-46a5-4249-a80e-1adbbecd2600-tgk2l", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:30:17.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5337" for this suite.
Feb 11 13:30:23.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:30:23.394: INFO: namespace replication-controller-5337 deletion completed in 6.158991691s

• [SLOW TEST:20.432 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
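The ReplicationController under test pairs a label selector with a pod template whose container serves its own hostname, so each replica should answer the dial check with its own pod name, as seen above. A sketch under assumed names, image, and namespace (the serve-hostname image is a guess at the kind of image the suite uses, not a confirmed reference):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels, // pods matching these labels are counted as replicas
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic",
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed image
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	if _, err := cs.CoreV1().ReplicationControllers("default").Create(context.TODO(), rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}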
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:30:23.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 11 13:30:23.458: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 11 13:30:23.468: INFO: Waiting for terminating namespaces to be deleted...
Feb 11 13:30:23.471: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Feb 11 13:30:23.497: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 11 13:30:23.497: INFO: Container weave ready: true, restart count 0
Feb 11 13:30:23.497: INFO: Container weave-npc ready: true, restart count 0
Feb 11 13:30:23.497: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 11 13:30:23.497: INFO: Container kube-bench ready: false, restart count 0
Feb 11 13:30:23.497: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 11 13:30:23.497: INFO: Container kube-proxy ready: true, restart count 0
Feb 11 13:30:23.497: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 11 13:30:23.537: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 11 13:30:23.538: INFO: Container kube-apiserver ready: true, restart count 0
Feb 11 13:30:23.538: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 11 13:30:23.538: INFO: Container kube-scheduler ready: true, restart count 13
Feb 11 13:30:23.538: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 11 13:30:23.538: INFO: Container coredns ready: true, restart count 0
Feb 11 13:30:23.538: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 11 13:30:23.538: INFO: Container etcd ready: true, restart count 0
Feb 11 13:30:23.538: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 11 13:30:23.538: INFO: Container weave ready: true, restart count 0
Feb 11 13:30:23.538: INFO: Container weave-npc ready: true, restart count 0
Feb 11 13:30:23.538: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 11 13:30:23.538: INFO: Container coredns ready: true, restart count 0
Feb 11 13:30:23.538: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 11 13:30:23.538: INFO: Container kube-controller-manager ready: true, restart count 21
Feb 11 13:30:23.538: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 11 13:30:23.538: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f25c3a0dc31469], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:30:24.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5577" for this suite.
Feb 11 13:30:30.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:30:30.790: INFO: namespace sched-pred-5577 deletion completed in 6.171462877s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.395 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
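The FailedScheduling event above is the expected outcome: a pod whose nodeSelector matches no node label stays Pending, and the scheduler reports "0/2 nodes are available". A minimal sketch of such an unschedulable pod; the pod name echoes the event, while the selector value and image are assumptions.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so scheduling must fail with
			// "node(s) didn't match node selector".
			NodeSelector: map[string]string{"label": "nonexistent-value"},
			Containers:   []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}}, // assumed image
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}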
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:30:30.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 11 13:30:30.910: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8182,SelfLink:/api/v1/namespaces/watch-8182/configmaps/e2e-watch-test-configmap-a,UID:16bab02b-5e3b-42a4-a491-cdbe8eb0a201,ResourceVersion:23948398,Generation:0,CreationTimestamp:2020-02-11 13:30:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 11 13:30:30.911: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8182,SelfLink:/api/v1/namespaces/watch-8182/configmaps/e2e-watch-test-configmap-a,UID:16bab02b-5e3b-42a4-a491-cdbe8eb0a201,ResourceVersion:23948398,Generation:0,CreationTimestamp:2020-02-11 13:30:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 11 13:30:40.928: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8182,SelfLink:/api/v1/namespaces/watch-8182/configmaps/e2e-watch-test-configmap-a,UID:16bab02b-5e3b-42a4-a491-cdbe8eb0a201,ResourceVersion:23948412,Generation:0,CreationTimestamp:2020-02-11 13:30:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 11 13:30:40.929: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8182,SelfLink:/api/v1/namespaces/watch-8182/configmaps/e2e-watch-test-configmap-a,UID:16bab02b-5e3b-42a4-a491-cdbe8eb0a201,ResourceVersion:23948412,Generation:0,CreationTimestamp:2020-02-11 13:30:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 11 13:30:50.951: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8182,SelfLink:/api/v1/namespaces/watch-8182/configmaps/e2e-watch-test-configmap-a,UID:16bab02b-5e3b-42a4-a491-cdbe8eb0a201,ResourceVersion:23948426,Generation:0,CreationTimestamp:2020-02-11 13:30:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 11 13:30:50.951: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8182,SelfLink:/api/v1/namespaces/watch-8182/configmaps/e2e-watch-test-configmap-a,UID:16bab02b-5e3b-42a4-a491-cdbe8eb0a201,ResourceVersion:23948426,Generation:0,CreationTimestamp:2020-02-11 13:30:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 11 13:31:00.969: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8182,SelfLink:/api/v1/namespaces/watch-8182/configmaps/e2e-watch-test-configmap-a,UID:16bab02b-5e3b-42a4-a491-cdbe8eb0a201,ResourceVersion:23948441,Generation:0,CreationTimestamp:2020-02-11 13:30:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 11 13:31:00.969: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8182,SelfLink:/api/v1/namespaces/watch-8182/configmaps/e2e-watch-test-configmap-a,UID:16bab02b-5e3b-42a4-a491-cdbe8eb0a201,ResourceVersion:23948441,Generation:0,CreationTimestamp:2020-02-11 13:30:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 11 13:31:10.986: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8182,SelfLink:/api/v1/namespaces/watch-8182/configmaps/e2e-watch-test-configmap-b,UID:59f35b8e-a702-4491-8c77-4555f5a41271,ResourceVersion:23948455,Generation:0,CreationTimestamp:2020-02-11 13:31:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 11 13:31:10.987: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8182,SelfLink:/api/v1/namespaces/watch-8182/configmaps/e2e-watch-test-configmap-b,UID:59f35b8e-a702-4491-8c77-4555f5a41271,ResourceVersion:23948455,Generation:0,CreationTimestamp:2020-02-11 13:31:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 11 13:31:20.999: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8182,SelfLink:/api/v1/namespaces/watch-8182/configmaps/e2e-watch-test-configmap-b,UID:59f35b8e-a702-4491-8c77-4555f5a41271,ResourceVersion:23948469,Generation:0,CreationTimestamp:2020-02-11 13:31:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 11 13:31:21.000: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8182,SelfLink:/api/v1/namespaces/watch-8182/configmaps/e2e-watch-test-configmap-b,UID:59f35b8e-a702-4491-8c77-4555f5a41271,ResourceVersion:23948469,Generation:0,CreationTimestamp:2020-02-11 13:31:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:31:31.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8182" for this suite.
Feb 11 13:31:37.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:31:37.167: INFO: namespace watch-8182 deletion completed in 6.154426942s

• [SLOW TEST:66.377 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
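Each watcher above is a label-selector watch: ADDED/MODIFIED/DELETED events arrive carrying the full object, which is why the log can show the ResourceVersion advancing 23948398 → 23948412 → 23948426 → 23948441. A rough client-go equivalent of the label-A watcher; the selector and namespace are copied from the log, the rest is an illustrative assumption.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	w, err := cs.CoreV1().ConfigMaps("watch-8182").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	// Every event on the channel includes the object's current state.
	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue // e.g. a watch error object
		}
		fmt.Printf("Got : %s %s rv=%s data=%v\n", ev.Type, cm.Name, cm.ResourceVersion, cm.Data)
	}
}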
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:31:37.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 13:31:37.285: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 20.534552ms)
Feb 11 13:31:37.294: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.446927ms)
Feb 11 13:31:37.301: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.420891ms)
Feb 11 13:31:37.311: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.422505ms)
Feb 11 13:31:37.318: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.00872ms)
Feb 11 13:31:37.327: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.285662ms)
Feb 11 13:31:37.339: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.807198ms)
Feb 11 13:31:37.359: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 19.847677ms)
Feb 11 13:31:37.421: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 62.445555ms)
Feb 11 13:31:37.436: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 14.075457ms)
Feb 11 13:31:37.445: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.671943ms)
Feb 11 13:31:37.453: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.233774ms)
Feb 11 13:31:37.463: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.881217ms)
Feb 11 13:31:37.471: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.017685ms)
Feb 11 13:31:37.480: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.13395ms)
Feb 11 13:31:37.489: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.03044ms)
Feb 11 13:31:37.497: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.70304ms)
Feb 11 13:31:37.504: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.617872ms)
Feb 11 13:31:37.511: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.009534ms)
Feb 11 13:31:37.518: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.499326ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:31:37.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2230" for this suite.
Feb 11 13:31:43.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:31:43.703: INFO: namespace proxy-2230 deletion completed in 6.178281964s

• [SLOW TEST:6.535 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
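The nodes/<name>:<port>/proxy form probed above routes each request through the apiserver to the kubelet's port 10250, where the trailing path (here the /logs/ directory listing, truncated in the log output) is served by the kubelet itself. A sketch of the same call via client-go's REST client; the node name is copied from the log, everything else is illustrative.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/nodes/iruya-node:10250/proxy/logs/ — the ":10250" selects the
	// kubelet port explicitly instead of relying on the node's default proxy port.
	raw, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("iruya-node:10250").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("kubelet /logs/ listing: %d bytes\n", len(raw))
}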
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:31:43.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0211 13:32:14.481846       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 11 13:32:14.481: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:32:14.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7943" for this suite.
Feb 11 13:32:22.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:32:22.871: INFO: namespace gc-7943 deletion completed in 8.382245422s

• [SLOW TEST:39.168 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
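The 30-second wait above checks that with PropagationPolicy: Orphan the garbage collector removes only the Deployment and leaves its ReplicaSet (and pods) behind, clearing the owner reference rather than cascading the delete. A hedged sketch of issuing such a delete; the deployment name and namespace are assumptions.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	orphan := metav1.DeletePropagationOrphan
	if err := cs.AppsV1().Deployments("default").Delete(context.TODO(), "test-deployment",
		metav1.DeleteOptions{PropagationPolicy: &orphan}); err != nil {
		panic(err)
	}
	// The ReplicaSet should survive the delete; only its ownerReference to the
	// Deployment is removed, which is what the test's observation window verifies.
	rsList, err := cs.AppsV1().ReplicaSets("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d ReplicaSets still present after orphaning\n", len(rsList.Items))
}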
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:32:22.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8260.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8260.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8260.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8260.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 11 13:32:33.979: INFO: File wheezy_udp@dns-test-service-3.dns-8260.svc.cluster.local from pod  dns-8260/dns-test-5fb25453-e41c-445a-9d61-c0242d1b1748 contains '' instead of 'foo.example.com.'
Feb 11 13:32:33.993: INFO: File jessie_udp@dns-test-service-3.dns-8260.svc.cluster.local from pod  dns-8260/dns-test-5fb25453-e41c-445a-9d61-c0242d1b1748 contains '' instead of 'foo.example.com.'
Feb 11 13:32:33.993: INFO: Lookups using dns-8260/dns-test-5fb25453-e41c-445a-9d61-c0242d1b1748 failed for: [wheezy_udp@dns-test-service-3.dns-8260.svc.cluster.local jessie_udp@dns-test-service-3.dns-8260.svc.cluster.local]

Feb 11 13:32:39.014: INFO: DNS probes using dns-test-5fb25453-e41c-445a-9d61-c0242d1b1748 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8260.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8260.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8260.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8260.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 11 13:32:55.302: INFO: File wheezy_udp@dns-test-service-3.dns-8260.svc.cluster.local from pod  dns-8260/dns-test-bea5c7ab-110b-4ac9-9081-615685d793b6 contains '' instead of 'bar.example.com.'
Feb 11 13:32:55.308: INFO: File jessie_udp@dns-test-service-3.dns-8260.svc.cluster.local from pod  dns-8260/dns-test-bea5c7ab-110b-4ac9-9081-615685d793b6 contains '' instead of 'bar.example.com.'
Feb 11 13:32:55.308: INFO: Lookups using dns-8260/dns-test-bea5c7ab-110b-4ac9-9081-615685d793b6 failed for: [wheezy_udp@dns-test-service-3.dns-8260.svc.cluster.local jessie_udp@dns-test-service-3.dns-8260.svc.cluster.local]

Feb 11 13:33:00.330: INFO: File wheezy_udp@dns-test-service-3.dns-8260.svc.cluster.local from pod  dns-8260/dns-test-bea5c7ab-110b-4ac9-9081-615685d793b6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 11 13:33:00.369: INFO: File jessie_udp@dns-test-service-3.dns-8260.svc.cluster.local from pod  dns-8260/dns-test-bea5c7ab-110b-4ac9-9081-615685d793b6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 11 13:33:00.369: INFO: Lookups using dns-8260/dns-test-bea5c7ab-110b-4ac9-9081-615685d793b6 failed for: [wheezy_udp@dns-test-service-3.dns-8260.svc.cluster.local jessie_udp@dns-test-service-3.dns-8260.svc.cluster.local]

Feb 11 13:33:05.321: INFO: File wheezy_udp@dns-test-service-3.dns-8260.svc.cluster.local from pod  dns-8260/dns-test-bea5c7ab-110b-4ac9-9081-615685d793b6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 11 13:33:05.330: INFO: File jessie_udp@dns-test-service-3.dns-8260.svc.cluster.local from pod  dns-8260/dns-test-bea5c7ab-110b-4ac9-9081-615685d793b6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 11 13:33:05.330: INFO: Lookups using dns-8260/dns-test-bea5c7ab-110b-4ac9-9081-615685d793b6 failed for: [wheezy_udp@dns-test-service-3.dns-8260.svc.cluster.local jessie_udp@dns-test-service-3.dns-8260.svc.cluster.local]

Feb 11 13:33:10.331: INFO: DNS probes using dns-test-bea5c7ab-110b-4ac9-9081-615685d793b6 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8260.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8260.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8260.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8260.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 11 13:33:24.694: INFO: File wheezy_udp@dns-test-service-3.dns-8260.svc.cluster.local from pod  dns-8260/dns-test-57287ded-3790-4a4d-8aa5-cdf156c1dd80 contains '' instead of '10.111.182.144'
Feb 11 13:33:24.728: INFO: File jessie_udp@dns-test-service-3.dns-8260.svc.cluster.local from pod  dns-8260/dns-test-57287ded-3790-4a4d-8aa5-cdf156c1dd80 contains '' instead of '10.111.182.144'
Feb 11 13:33:24.729: INFO: Lookups using dns-8260/dns-test-57287ded-3790-4a4d-8aa5-cdf156c1dd80 failed for: [wheezy_udp@dns-test-service-3.dns-8260.svc.cluster.local jessie_udp@dns-test-service-3.dns-8260.svc.cluster.local]

Feb 11 13:33:29.759: INFO: DNS probes using dns-test-57287ded-3790-4a4d-8aa5-cdf156c1dd80 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:33:30.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8260" for this suite.
Feb 11 13:33:36.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:33:36.243: INFO: namespace dns-8260 deletion completed in 6.178408572s

• [SLOW TEST:73.371 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
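The three probe phases above correspond to three shapes of the same Service: ExternalName pointing at foo.example.com (DNS serves a CNAME), the same service updated to bar.example.com, and finally type ClusterIP (DNS serves an A record, 10.111.182.144 here). A sketch of the initial ExternalName form; the service name mirrors the log, the namespace is assumed.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
		Spec: corev1.ServiceSpec{
			// Cluster DNS answers <name>.<ns>.svc.cluster.local with a CNAME
			// to this external host; no cluster IP is allocated.
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com",
		},
	}
	if _, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}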
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:33:36.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 11 13:33:58.467: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1253 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 13:33:58.467: INFO: >>> kubeConfig: /root/.kube/config
I0211 13:33:58.578900       9 log.go:172] (0xc00084a840) (0xc000246780) Create stream
I0211 13:33:58.579023       9 log.go:172] (0xc00084a840) (0xc000246780) Stream added, broadcasting: 1
I0211 13:33:58.631727       9 log.go:172] (0xc00084a840) Reply frame received for 1
I0211 13:33:58.631909       9 log.go:172] (0xc00084a840) (0xc000246820) Create stream
I0211 13:33:58.631937       9 log.go:172] (0xc00084a840) (0xc000246820) Stream added, broadcasting: 3
I0211 13:33:58.634930       9 log.go:172] (0xc00084a840) Reply frame received for 3
I0211 13:33:58.634996       9 log.go:172] (0xc00084a840) (0xc0018520a0) Create stream
I0211 13:33:58.635030       9 log.go:172] (0xc00084a840) (0xc0018520a0) Stream added, broadcasting: 5
I0211 13:33:58.638374       9 log.go:172] (0xc00084a840) Reply frame received for 5
I0211 13:33:58.843100       9 log.go:172] (0xc00084a840) Data frame received for 3
I0211 13:33:58.843204       9 log.go:172] (0xc000246820) (3) Data frame handling
I0211 13:33:58.843278       9 log.go:172] (0xc000246820) (3) Data frame sent
I0211 13:33:59.030427       9 log.go:172] (0xc00084a840) (0xc000246820) Stream removed, broadcasting: 3
I0211 13:33:59.030855       9 log.go:172] (0xc00084a840) Data frame received for 1
I0211 13:33:59.031343       9 log.go:172] (0xc000246780) (1) Data frame handling
I0211 13:33:59.031637       9 log.go:172] (0xc000246780) (1) Data frame sent
I0211 13:33:59.031804       9 log.go:172] (0xc00084a840) (0xc0018520a0) Stream removed, broadcasting: 5
I0211 13:33:59.032269       9 log.go:172] (0xc00084a840) (0xc000246780) Stream removed, broadcasting: 1
I0211 13:33:59.032414       9 log.go:172] (0xc00084a840) Go away received
I0211 13:33:59.032767       9 log.go:172] (0xc00084a840) (0xc000246780) Stream removed, broadcasting: 1
I0211 13:33:59.032832       9 log.go:172] (0xc00084a840) (0xc000246820) Stream removed, broadcasting: 3
I0211 13:33:59.032854       9 log.go:172] (0xc00084a840) (0xc0018520a0) Stream removed, broadcasting: 5
Feb 11 13:33:59.032: INFO: Exec stderr: ""
Feb 11 13:33:59.033: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1253 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 13:33:59.033: INFO: >>> kubeConfig: /root/.kube/config
I0211 13:33:59.110023       9 log.go:172] (0xc002a4ac60) (0xc001852b40) Create stream
I0211 13:33:59.110147       9 log.go:172] (0xc002a4ac60) (0xc001852b40) Stream added, broadcasting: 1
I0211 13:33:59.120565       9 log.go:172] (0xc002a4ac60) Reply frame received for 1
I0211 13:33:59.120613       9 log.go:172] (0xc002a4ac60) (0xc0022d0960) Create stream
I0211 13:33:59.120625       9 log.go:172] (0xc002a4ac60) (0xc0022d0960) Stream added, broadcasting: 3
I0211 13:33:59.121723       9 log.go:172] (0xc002a4ac60) Reply frame received for 3
I0211 13:33:59.121761       9 log.go:172] (0xc002a4ac60) (0xc001852e60) Create stream
I0211 13:33:59.121771       9 log.go:172] (0xc002a4ac60) (0xc001852e60) Stream added, broadcasting: 5
I0211 13:33:59.124448       9 log.go:172] (0xc002a4ac60) Reply frame received for 5
I0211 13:33:59.217602       9 log.go:172] (0xc002a4ac60) Data frame received for 3
I0211 13:33:59.217678       9 log.go:172] (0xc0022d0960) (3) Data frame handling
I0211 13:33:59.217704       9 log.go:172] (0xc0022d0960) (3) Data frame sent
I0211 13:33:59.360292       9 log.go:172] (0xc002a4ac60) Data frame received for 1
I0211 13:33:59.360458       9 log.go:172] (0xc002a4ac60) (0xc0022d0960) Stream removed, broadcasting: 3
I0211 13:33:59.360680       9 log.go:172] (0xc001852b40) (1) Data frame handling
I0211 13:33:59.360720       9 log.go:172] (0xc001852b40) (1) Data frame sent
I0211 13:33:59.360763       9 log.go:172] (0xc002a4ac60) (0xc001852e60) Stream removed, broadcasting: 5
I0211 13:33:59.360866       9 log.go:172] (0xc002a4ac60) (0xc001852b40) Stream removed, broadcasting: 1
I0211 13:33:59.360900       9 log.go:172] (0xc002a4ac60) Go away received
I0211 13:33:59.361119       9 log.go:172] (0xc002a4ac60) (0xc001852b40) Stream removed, broadcasting: 1
I0211 13:33:59.361133       9 log.go:172] (0xc002a4ac60) (0xc0022d0960) Stream removed, broadcasting: 3
I0211 13:33:59.361142       9 log.go:172] (0xc002a4ac60) (0xc001852e60) Stream removed, broadcasting: 5
Feb 11 13:33:59.361: INFO: Exec stderr: ""
Feb 11 13:33:59.361: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1253 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 13:33:59.361: INFO: >>> kubeConfig: /root/.kube/config
I0211 13:33:59.423396       9 log.go:172] (0xc000cd2630) (0xc0016e2a00) Create stream
I0211 13:33:59.423495       9 log.go:172] (0xc000cd2630) (0xc0016e2a00) Stream added, broadcasting: 1
I0211 13:33:59.430747       9 log.go:172] (0xc000cd2630) Reply frame received for 1
I0211 13:33:59.430808       9 log.go:172] (0xc000cd2630) (0xc000246b40) Create stream
I0211 13:33:59.430828       9 log.go:172] (0xc000cd2630) (0xc000246b40) Stream added, broadcasting: 3
I0211 13:33:59.432527       9 log.go:172] (0xc000cd2630) Reply frame received for 3
I0211 13:33:59.432582       9 log.go:172] (0xc000cd2630) (0xc0004500a0) Create stream
I0211 13:33:59.432602       9 log.go:172] (0xc000cd2630) (0xc0004500a0) Stream added, broadcasting: 5
I0211 13:33:59.437863       9 log.go:172] (0xc000cd2630) Reply frame received for 5
I0211 13:33:59.558742       9 log.go:172] (0xc000cd2630) Data frame received for 3
I0211 13:33:59.558787       9 log.go:172] (0xc000246b40) (3) Data frame handling
I0211 13:33:59.558803       9 log.go:172] (0xc000246b40) (3) Data frame sent
I0211 13:33:59.710433       9 log.go:172] (0xc000cd2630) Data frame received for 1
I0211 13:33:59.710736       9 log.go:172] (0xc000cd2630) (0xc000246b40) Stream removed, broadcasting: 3
I0211 13:33:59.710945       9 log.go:172] (0xc0016e2a00) (1) Data frame handling
I0211 13:33:59.711014       9 log.go:172] (0xc000cd2630) (0xc0004500a0) Stream removed, broadcasting: 5
I0211 13:33:59.711101       9 log.go:172] (0xc0016e2a00) (1) Data frame sent
I0211 13:33:59.711118       9 log.go:172] (0xc000cd2630) (0xc0016e2a00) Stream removed, broadcasting: 1
I0211 13:33:59.711159       9 log.go:172] (0xc000cd2630) Go away received
I0211 13:33:59.711391       9 log.go:172] (0xc000cd2630) (0xc0016e2a00) Stream removed, broadcasting: 1
I0211 13:33:59.711415       9 log.go:172] (0xc000cd2630) (0xc000246b40) Stream removed, broadcasting: 3
I0211 13:33:59.711433       9 log.go:172] (0xc000cd2630) (0xc0004500a0) Stream removed, broadcasting: 5
Feb 11 13:33:59.711: INFO: Exec stderr: ""
Feb 11 13:33:59.711: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1253 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 13:33:59.711: INFO: >>> kubeConfig: /root/.kube/config
I0211 13:33:59.804188       9 log.go:172] (0xc000afa9a0) (0xc000450500) Create stream
I0211 13:33:59.804282       9 log.go:172] (0xc000afa9a0) (0xc000450500) Stream added, broadcasting: 1
I0211 13:33:59.813106       9 log.go:172] (0xc000afa9a0) Reply frame received for 1
I0211 13:33:59.813144       9 log.go:172] (0xc000afa9a0) (0xc000246c80) Create stream
I0211 13:33:59.813155       9 log.go:172] (0xc000afa9a0) (0xc000246c80) Stream added, broadcasting: 3
I0211 13:33:59.815888       9 log.go:172] (0xc000afa9a0) Reply frame received for 3
I0211 13:33:59.815923       9 log.go:172] (0xc000afa9a0) (0xc0016e2aa0) Create stream
I0211 13:33:59.815935       9 log.go:172] (0xc000afa9a0) (0xc0016e2aa0) Stream added, broadcasting: 5
I0211 13:33:59.817137       9 log.go:172] (0xc000afa9a0) Reply frame received for 5
I0211 13:33:59.932506       9 log.go:172] (0xc000afa9a0) Data frame received for 3
I0211 13:33:59.932610       9 log.go:172] (0xc000246c80) (3) Data frame handling
I0211 13:33:59.932646       9 log.go:172] (0xc000246c80) (3) Data frame sent
I0211 13:34:00.079367       9 log.go:172] (0xc000afa9a0) Data frame received for 1
I0211 13:34:00.079507       9 log.go:172] (0xc000afa9a0) (0xc000246c80) Stream removed, broadcasting: 3
I0211 13:34:00.079590       9 log.go:172] (0xc000450500) (1) Data frame handling
I0211 13:34:00.079625       9 log.go:172] (0xc000450500) (1) Data frame sent
I0211 13:34:00.080214       9 log.go:172] (0xc000afa9a0) (0xc0016e2aa0) Stream removed, broadcasting: 5
I0211 13:34:00.080550       9 log.go:172] (0xc000afa9a0) (0xc000450500) Stream removed, broadcasting: 1
I0211 13:34:00.080605       9 log.go:172] (0xc000afa9a0) Go away received
I0211 13:34:00.080914       9 log.go:172] (0xc000afa9a0) (0xc000450500) Stream removed, broadcasting: 1
I0211 13:34:00.081001       9 log.go:172] (0xc000afa9a0) (0xc000246c80) Stream removed, broadcasting: 3
I0211 13:34:00.081032       9 log.go:172] (0xc000afa9a0) (0xc0016e2aa0) Stream removed, broadcasting: 5
Feb 11 13:34:00.081: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 11 13:34:00.081: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1253 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 13:34:00.081: INFO: >>> kubeConfig: /root/.kube/config
I0211 13:34:00.153011       9 log.go:172] (0xc000cd3600) (0xc0016e2f00) Create stream
I0211 13:34:00.153094       9 log.go:172] (0xc000cd3600) (0xc0016e2f00) Stream added, broadcasting: 1
I0211 13:34:00.164885       9 log.go:172] (0xc000cd3600) Reply frame received for 1
I0211 13:34:00.165081       9 log.go:172] (0xc000cd3600) (0xc001852f00) Create stream
I0211 13:34:00.165101       9 log.go:172] (0xc000cd3600) (0xc001852f00) Stream added, broadcasting: 3
I0211 13:34:00.167296       9 log.go:172] (0xc000cd3600) Reply frame received for 3
I0211 13:34:00.167361       9 log.go:172] (0xc000cd3600) (0xc0016e2fa0) Create stream
I0211 13:34:00.167374       9 log.go:172] (0xc000cd3600) (0xc0016e2fa0) Stream added, broadcasting: 5
I0211 13:34:00.169595       9 log.go:172] (0xc000cd3600) Reply frame received for 5
I0211 13:34:00.279371       9 log.go:172] (0xc000cd3600) Data frame received for 3
I0211 13:34:00.279489       9 log.go:172] (0xc001852f00) (3) Data frame handling
I0211 13:34:00.279534       9 log.go:172] (0xc001852f00) (3) Data frame sent
I0211 13:34:00.418843       9 log.go:172] (0xc000cd3600) Data frame received for 1
I0211 13:34:00.419025       9 log.go:172] (0xc000cd3600) (0xc001852f00) Stream removed, broadcasting: 3
I0211 13:34:00.419218       9 log.go:172] (0xc000cd3600) (0xc0016e2fa0) Stream removed, broadcasting: 5
I0211 13:34:00.419317       9 log.go:172] (0xc0016e2f00) (1) Data frame handling
I0211 13:34:00.419357       9 log.go:172] (0xc0016e2f00) (1) Data frame sent
I0211 13:34:00.419375       9 log.go:172] (0xc000cd3600) (0xc0016e2f00) Stream removed, broadcasting: 1
I0211 13:34:00.419423       9 log.go:172] (0xc000cd3600) Go away received
I0211 13:34:00.420178       9 log.go:172] (0xc000cd3600) (0xc0016e2f00) Stream removed, broadcasting: 1
I0211 13:34:00.420198       9 log.go:172] (0xc000cd3600) (0xc001852f00) Stream removed, broadcasting: 3
I0211 13:34:00.420218       9 log.go:172] (0xc000cd3600) (0xc0016e2fa0) Stream removed, broadcasting: 5
Feb 11 13:34:00.420: INFO: Exec stderr: ""
Feb 11 13:34:00.420: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1253 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 13:34:00.420: INFO: >>> kubeConfig: /root/.kube/config
I0211 13:34:00.503142       9 log.go:172] (0xc0011a8420) (0xc00263c320) Create stream
I0211 13:34:00.503318       9 log.go:172] (0xc0011a8420) (0xc00263c320) Stream added, broadcasting: 1
I0211 13:34:00.514613       9 log.go:172] (0xc0011a8420) Reply frame received for 1
I0211 13:34:00.514711       9 log.go:172] (0xc0011a8420) (0xc0004505a0) Create stream
I0211 13:34:00.514745       9 log.go:172] (0xc0011a8420) (0xc0004505a0) Stream added, broadcasting: 3
I0211 13:34:00.516680       9 log.go:172] (0xc0011a8420) Reply frame received for 3
I0211 13:34:00.516739       9 log.go:172] (0xc0011a8420) (0xc0022d0a00) Create stream
I0211 13:34:00.516767       9 log.go:172] (0xc0011a8420) (0xc0022d0a00) Stream added, broadcasting: 5
I0211 13:34:00.519107       9 log.go:172] (0xc0011a8420) Reply frame received for 5
I0211 13:34:00.661798       9 log.go:172] (0xc0011a8420) Data frame received for 3
I0211 13:34:00.661847       9 log.go:172] (0xc0004505a0) (3) Data frame handling
I0211 13:34:00.661877       9 log.go:172] (0xc0004505a0) (3) Data frame sent
I0211 13:34:00.765736       9 log.go:172] (0xc0011a8420) (0xc0004505a0) Stream removed, broadcasting: 3
I0211 13:34:00.765894       9 log.go:172] (0xc0011a8420) Data frame received for 1
I0211 13:34:00.766109       9 log.go:172] (0xc0011a8420) (0xc0022d0a00) Stream removed, broadcasting: 5
I0211 13:34:00.766197       9 log.go:172] (0xc00263c320) (1) Data frame handling
I0211 13:34:00.766243       9 log.go:172] (0xc00263c320) (1) Data frame sent
I0211 13:34:00.766264       9 log.go:172] (0xc0011a8420) (0xc00263c320) Stream removed, broadcasting: 1
I0211 13:34:00.766290       9 log.go:172] (0xc0011a8420) Go away received
I0211 13:34:00.766671       9 log.go:172] (0xc0011a8420) (0xc00263c320) Stream removed, broadcasting: 1
I0211 13:34:00.766730       9 log.go:172] (0xc0011a8420) (0xc0004505a0) Stream removed, broadcasting: 3
I0211 13:34:00.766764       9 log.go:172] (0xc0011a8420) (0xc0022d0a00) Stream removed, broadcasting: 5
Feb 11 13:34:00.766: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 11 13:34:00.767: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1253 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 13:34:00.767: INFO: >>> kubeConfig: /root/.kube/config
I0211 13:34:00.829530       9 log.go:172] (0xc0011a8dc0) (0xc00263c640) Create stream
I0211 13:34:00.829606       9 log.go:172] (0xc0011a8dc0) (0xc00263c640) Stream added, broadcasting: 1
I0211 13:34:00.840203       9 log.go:172] (0xc0011a8dc0) Reply frame received for 1
I0211 13:34:00.840337       9 log.go:172] (0xc0011a8dc0) (0xc001c400a0) Create stream
I0211 13:34:00.840352       9 log.go:172] (0xc0011a8dc0) (0xc001c400a0) Stream added, broadcasting: 3
I0211 13:34:00.842028       9 log.go:172] (0xc0011a8dc0) Reply frame received for 3
I0211 13:34:00.842067       9 log.go:172] (0xc0011a8dc0) (0xc00263c6e0) Create stream
I0211 13:34:00.842079       9 log.go:172] (0xc0011a8dc0) (0xc00263c6e0) Stream added, broadcasting: 5
I0211 13:34:00.844327       9 log.go:172] (0xc0011a8dc0) Reply frame received for 5
I0211 13:34:00.927387       9 log.go:172] (0xc0011a8dc0) Data frame received for 3
I0211 13:34:00.927473       9 log.go:172] (0xc001c400a0) (3) Data frame handling
I0211 13:34:00.927506       9 log.go:172] (0xc001c400a0) (3) Data frame sent
I0211 13:34:01.079621       9 log.go:172] (0xc0011a8dc0) (0xc001c400a0) Stream removed, broadcasting: 3
I0211 13:34:01.079931       9 log.go:172] (0xc0011a8dc0) Data frame received for 1
I0211 13:34:01.080004       9 log.go:172] (0xc00263c640) (1) Data frame handling
I0211 13:34:01.080234       9 log.go:172] (0xc00263c640) (1) Data frame sent
I0211 13:34:01.080255       9 log.go:172] (0xc0011a8dc0) (0xc00263c6e0) Stream removed, broadcasting: 5
I0211 13:34:01.080460       9 log.go:172] (0xc0011a8dc0) (0xc00263c640) Stream removed, broadcasting: 1
I0211 13:34:01.080608       9 log.go:172] (0xc0011a8dc0) Go away received
I0211 13:34:01.080964       9 log.go:172] (0xc0011a8dc0) (0xc00263c640) Stream removed, broadcasting: 1
I0211 13:34:01.080985       9 log.go:172] (0xc0011a8dc0) (0xc001c400a0) Stream removed, broadcasting: 3
I0211 13:34:01.080998       9 log.go:172] (0xc0011a8dc0) (0xc00263c6e0) Stream removed, broadcasting: 5
Feb 11 13:34:01.081: INFO: Exec stderr: ""
Feb 11 13:34:01.081: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1253 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 13:34:01.081: INFO: >>> kubeConfig: /root/.kube/config
I0211 13:34:01.138734       9 log.go:172] (0xc00141e580) (0xc001853360) Create stream
I0211 13:34:01.138872       9 log.go:172] (0xc00141e580) (0xc001853360) Stream added, broadcasting: 1
I0211 13:34:01.145934       9 log.go:172] (0xc00141e580) Reply frame received for 1
I0211 13:34:01.145959       9 log.go:172] (0xc00141e580) (0xc000450a00) Create stream
I0211 13:34:01.145966       9 log.go:172] (0xc00141e580) (0xc000450a00) Stream added, broadcasting: 3
I0211 13:34:01.148139       9 log.go:172] (0xc00141e580) Reply frame received for 3
I0211 13:34:01.148168       9 log.go:172] (0xc00141e580) (0xc001c40140) Create stream
I0211 13:34:01.148181       9 log.go:172] (0xc00141e580) (0xc001c40140) Stream added, broadcasting: 5
I0211 13:34:01.149480       9 log.go:172] (0xc00141e580) Reply frame received for 5
I0211 13:34:01.257493       9 log.go:172] (0xc00141e580) Data frame received for 3
I0211 13:34:01.257651       9 log.go:172] (0xc000450a00) (3) Data frame handling
I0211 13:34:01.257691       9 log.go:172] (0xc000450a00) (3) Data frame sent
I0211 13:34:01.377822       9 log.go:172] (0xc00141e580) Data frame received for 1
I0211 13:34:01.378047       9 log.go:172] (0xc00141e580) (0xc000450a00) Stream removed, broadcasting: 3
I0211 13:34:01.378190       9 log.go:172] (0xc001853360) (1) Data frame handling
I0211 13:34:01.378278       9 log.go:172] (0xc001853360) (1) Data frame sent
I0211 13:34:01.378389       9 log.go:172] (0xc00141e580) (0xc001c40140) Stream removed, broadcasting: 5
I0211 13:34:01.378493       9 log.go:172] (0xc00141e580) (0xc001853360) Stream removed, broadcasting: 1
I0211 13:34:01.378588       9 log.go:172] (0xc00141e580) Go away received
I0211 13:34:01.379394       9 log.go:172] (0xc00141e580) (0xc001853360) Stream removed, broadcasting: 1
I0211 13:34:01.379426       9 log.go:172] (0xc00141e580) (0xc000450a00) Stream removed, broadcasting: 3
I0211 13:34:01.379490       9 log.go:172] (0xc00141e580) (0xc001c40140) Stream removed, broadcasting: 5
Feb 11 13:34:01.379: INFO: Exec stderr: ""
Feb 11 13:34:01.380: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1253 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 13:34:01.380: INFO: >>> kubeConfig: /root/.kube/config
I0211 13:34:01.460554       9 log.go:172] (0xc001672000) (0xc0002475e0) Create stream
I0211 13:34:01.460615       9 log.go:172] (0xc001672000) (0xc0002475e0) Stream added, broadcasting: 1
I0211 13:34:01.468292       9 log.go:172] (0xc001672000) Reply frame received for 1
I0211 13:34:01.468336       9 log.go:172] (0xc001672000) (0xc00263c820) Create stream
I0211 13:34:01.468353       9 log.go:172] (0xc001672000) (0xc00263c820) Stream added, broadcasting: 3
I0211 13:34:01.470948       9 log.go:172] (0xc001672000) Reply frame received for 3
I0211 13:34:01.471002       9 log.go:172] (0xc001672000) (0xc000450aa0) Create stream
I0211 13:34:01.471014       9 log.go:172] (0xc001672000) (0xc000450aa0) Stream added, broadcasting: 5
I0211 13:34:01.476175       9 log.go:172] (0xc001672000) Reply frame received for 5
I0211 13:34:01.582411       9 log.go:172] (0xc001672000) Data frame received for 3
I0211 13:34:01.582513       9 log.go:172] (0xc00263c820) (3) Data frame handling
I0211 13:34:01.582571       9 log.go:172] (0xc00263c820) (3) Data frame sent
I0211 13:34:01.702753       9 log.go:172] (0xc001672000) Data frame received for 1
I0211 13:34:01.702916       9 log.go:172] (0xc001672000) (0xc00263c820) Stream removed, broadcasting: 3
I0211 13:34:01.703034       9 log.go:172] (0xc0002475e0) (1) Data frame handling
I0211 13:34:01.703110       9 log.go:172] (0xc0002475e0) (1) Data frame sent
I0211 13:34:01.703132       9 log.go:172] (0xc001672000) (0xc000450aa0) Stream removed, broadcasting: 5
I0211 13:34:01.703278       9 log.go:172] (0xc001672000) (0xc0002475e0) Stream removed, broadcasting: 1
I0211 13:34:01.703366       9 log.go:172] (0xc001672000) Go away received
I0211 13:34:01.704113       9 log.go:172] (0xc001672000) (0xc0002475e0) Stream removed, broadcasting: 1
I0211 13:34:01.704372       9 log.go:172] (0xc001672000) (0xc00263c820) Stream removed, broadcasting: 3
I0211 13:34:01.704430       9 log.go:172] (0xc001672000) (0xc000450aa0) Stream removed, broadcasting: 5
Feb 11 13:34:01.704: INFO: Exec stderr: ""
Feb 11 13:34:01.704: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1253 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 13:34:01.704: INFO: >>> kubeConfig: /root/.kube/config
I0211 13:34:01.807052       9 log.go:172] (0xc00176a370) (0xc0016e3400) Create stream
I0211 13:34:01.807212       9 log.go:172] (0xc00176a370) (0xc0016e3400) Stream added, broadcasting: 1
I0211 13:34:01.813002       9 log.go:172] (0xc00176a370) Reply frame received for 1
I0211 13:34:01.813048       9 log.go:172] (0xc00176a370) (0xc000450be0) Create stream
I0211 13:34:01.813069       9 log.go:172] (0xc00176a370) (0xc000450be0) Stream added, broadcasting: 3
I0211 13:34:01.814967       9 log.go:172] (0xc00176a370) Reply frame received for 3
I0211 13:34:01.815165       9 log.go:172] (0xc00176a370) (0xc000450dc0) Create stream
I0211 13:34:01.815179       9 log.go:172] (0xc00176a370) (0xc000450dc0) Stream added, broadcasting: 5
I0211 13:34:01.816778       9 log.go:172] (0xc00176a370) Reply frame received for 5
I0211 13:34:01.930416       9 log.go:172] (0xc00176a370) Data frame received for 3
I0211 13:34:01.930648       9 log.go:172] (0xc000450be0) (3) Data frame handling
I0211 13:34:01.930695       9 log.go:172] (0xc000450be0) (3) Data frame sent
I0211 13:34:02.127394       9 log.go:172] (0xc00176a370) (0xc000450be0) Stream removed, broadcasting: 3
I0211 13:34:02.127813       9 log.go:172] (0xc00176a370) Data frame received for 1
I0211 13:34:02.128136       9 log.go:172] (0xc00176a370) (0xc000450dc0) Stream removed, broadcasting: 5
I0211 13:34:02.128346       9 log.go:172] (0xc0016e3400) (1) Data frame handling
I0211 13:34:02.128541       9 log.go:172] (0xc0016e3400) (1) Data frame sent
I0211 13:34:02.128575       9 log.go:172] (0xc00176a370) (0xc0016e3400) Stream removed, broadcasting: 1
I0211 13:34:02.128672       9 log.go:172] (0xc00176a370) Go away received
I0211 13:34:02.129061       9 log.go:172] (0xc00176a370) (0xc0016e3400) Stream removed, broadcasting: 1
I0211 13:34:02.129096       9 log.go:172] (0xc00176a370) (0xc000450be0) Stream removed, broadcasting: 3
I0211 13:34:02.129108       9 log.go:172] (0xc00176a370) (0xc000450dc0) Stream removed, broadcasting: 5
Feb 11 13:34:02.129: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:34:02.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1253" for this suite.
Feb 11 13:34:46.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:34:46.326: INFO: namespace e2e-kubelet-etc-hosts-1253 deletion completed in 44.184929415s

• [SLOW TEST:70.079 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
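A minimal sketch of pod specs that would reproduce the three cases verified above, assuming a generic busybox image and illustrative names (the test's own pods are test-pod and test-host-network-pod with generated volumes):

apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo
spec:
  volumes:
  - name: hosts-file
    emptyDir: {}
  containers:
  - name: busybox-managed          # no mount at /etc/hosts: the kubelet manages the file
    image: busybox
    command: ["sleep", "3600"]
  - name: busybox-unmanaged        # an explicit mount over /etc/hosts opts this container out
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: hosts-file
      mountPath: /etc/hosts
---
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-host-network-demo
spec:
  hostNetwork: true                # the pod sees the node's own /etc/hosts, not a kubelet-managed copy
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]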
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:34:46.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 11 13:34:46.469: INFO: Waiting up to 5m0s for pod "pod-27d17214-09aa-4e0f-a820-75044dab5990" in namespace "emptydir-3265" to be "success or failure"
Feb 11 13:34:46.490: INFO: Pod "pod-27d17214-09aa-4e0f-a820-75044dab5990": Phase="Pending", Reason="", readiness=false. Elapsed: 20.593548ms
Feb 11 13:34:48.514: INFO: Pod "pod-27d17214-09aa-4e0f-a820-75044dab5990": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044721654s
Feb 11 13:34:50.535: INFO: Pod "pod-27d17214-09aa-4e0f-a820-75044dab5990": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065110825s
Feb 11 13:34:52.551: INFO: Pod "pod-27d17214-09aa-4e0f-a820-75044dab5990": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081444188s
Feb 11 13:34:54.596: INFO: Pod "pod-27d17214-09aa-4e0f-a820-75044dab5990": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.12607141s
STEP: Saw pod success
Feb 11 13:34:54.596: INFO: Pod "pod-27d17214-09aa-4e0f-a820-75044dab5990" satisfied condition "success or failure"
Feb 11 13:34:54.604: INFO: Trying to get logs from node iruya-node pod pod-27d17214-09aa-4e0f-a820-75044dab5990 container test-container: 
STEP: delete the pod
Feb 11 13:34:54.686: INFO: Waiting for pod pod-27d17214-09aa-4e0f-a820-75044dab5990 to disappear
Feb 11 13:34:54.802: INFO: Pod pod-27d17214-09aa-4e0f-a820-75044dab5990 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:34:54.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3265" for this suite.
Feb 11 13:35:00.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:35:01.033: INFO: namespace emptydir-3265 deletion completed in 6.22524416s

• [SLOW TEST:14.707 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
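The tmpfs flavour of an emptyDir volume is selected with medium: Memory; a minimal sketch of the kind of pod the test creates, assuming a plain busybox image in place of the test image (which additionally asserts the expected mode bits):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # back the volume with tmpfs instead of node disk
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]   # prints the mount's mode bits
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume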
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:35:01.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-ef9a73ca-4f26-4454-a996-62662636c6f6
STEP: Creating a pod to test consume configMaps
Feb 11 13:35:01.316: INFO: Waiting up to 5m0s for pod "pod-configmaps-7d5e7694-c752-4ae9-ac91-7c1214e9665e" in namespace "configmap-1862" to be "success or failure"
Feb 11 13:35:01.326: INFO: Pod "pod-configmaps-7d5e7694-c752-4ae9-ac91-7c1214e9665e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.603254ms
Feb 11 13:35:03.336: INFO: Pod "pod-configmaps-7d5e7694-c752-4ae9-ac91-7c1214e9665e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020020196s
Feb 11 13:35:05.353: INFO: Pod "pod-configmaps-7d5e7694-c752-4ae9-ac91-7c1214e9665e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03709744s
Feb 11 13:35:07.368: INFO: Pod "pod-configmaps-7d5e7694-c752-4ae9-ac91-7c1214e9665e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051428292s
Feb 11 13:35:09.383: INFO: Pod "pod-configmaps-7d5e7694-c752-4ae9-ac91-7c1214e9665e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06674621s
STEP: Saw pod success
Feb 11 13:35:09.383: INFO: Pod "pod-configmaps-7d5e7694-c752-4ae9-ac91-7c1214e9665e" satisfied condition "success or failure"
Feb 11 13:35:09.393: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7d5e7694-c752-4ae9-ac91-7c1214e9665e container configmap-volume-test: 
STEP: delete the pod
Feb 11 13:35:09.496: INFO: Waiting for pod pod-configmaps-7d5e7694-c752-4ae9-ac91-7c1214e9665e to disappear
Feb 11 13:35:09.515: INFO: Pod pod-configmaps-7d5e7694-c752-4ae9-ac91-7c1214e9665e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:35:09.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1862" for this suite.
Feb 11 13:35:15.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:35:15.765: INFO: namespace configmap-1862 deletion completed in 6.241079235s

• [SLOW TEST:14.731 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
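defaultMode on a configMap volume sets the permission bits of every projected file; a minimal sketch, assuming a pre-existing ConfigMap named my-configmap and 0400 as an example mode (the mode the test actually asserts is elided from the log):

apiVersion: v1
kind: Pod
metadata:
  name: configmap-defaultmode-demo
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: my-configmap           # assumed to exist with at least one key
      defaultMode: 0400            # each projected file appears as -r--------
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/configmap-volume"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume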
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:35:15.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-311b868f-553c-436c-92c3-c617df5d59cc
STEP: Creating a pod to test consume configMaps
Feb 11 13:35:16.000: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-90065a34-016e-4ebc-a036-2206f98ea7a2" in namespace "projected-5545" to be "success or failure"
Feb 11 13:35:16.030: INFO: Pod "pod-projected-configmaps-90065a34-016e-4ebc-a036-2206f98ea7a2": Phase="Pending", Reason="", readiness=false. Elapsed: 29.708399ms
Feb 11 13:35:18.039: INFO: Pod "pod-projected-configmaps-90065a34-016e-4ebc-a036-2206f98ea7a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039015267s
Feb 11 13:35:20.046: INFO: Pod "pod-projected-configmaps-90065a34-016e-4ebc-a036-2206f98ea7a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045694476s
Feb 11 13:35:22.060: INFO: Pod "pod-projected-configmaps-90065a34-016e-4ebc-a036-2206f98ea7a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05917384s
Feb 11 13:35:24.069: INFO: Pod "pod-projected-configmaps-90065a34-016e-4ebc-a036-2206f98ea7a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068491227s
STEP: Saw pod success
Feb 11 13:35:24.069: INFO: Pod "pod-projected-configmaps-90065a34-016e-4ebc-a036-2206f98ea7a2" satisfied condition "success or failure"
Feb 11 13:35:24.082: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-90065a34-016e-4ebc-a036-2206f98ea7a2 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 11 13:35:24.155: INFO: Waiting for pod pod-projected-configmaps-90065a34-016e-4ebc-a036-2206f98ea7a2 to disappear
Feb 11 13:35:24.158: INFO: Pod pod-projected-configmaps-90065a34-016e-4ebc-a036-2206f98ea7a2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:35:24.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5545" for this suite.
Feb 11 13:35:30.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:35:30.427: INFO: namespace projected-5545 deletion completed in 6.259104416s

• [SLOW TEST:14.662 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
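The "with mappings" variant projects individual ConfigMap keys onto chosen file paths via items; a minimal sketch, assuming a ConfigMap my-configmap that contains a key data-1:

apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-mapping-demo
spec:
  restartPolicy: Never
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: my-configmap
          items:
          - key: data-1            # project only this key ...
            path: path/to/data-2   # ... under a remapped file name
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/path/to/data-2"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected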
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:35:30.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb 11 13:35:30.550: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 11 13:35:30.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1530'
Feb 11 13:35:32.839: INFO: stderr: ""
Feb 11 13:35:32.840: INFO: stdout: "service/redis-slave created\n"
Feb 11 13:35:32.841: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 11 13:35:32.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1530'
Feb 11 13:35:33.672: INFO: stderr: ""
Feb 11 13:35:33.672: INFO: stdout: "service/redis-master created\n"
Feb 11 13:35:33.674: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 11 13:35:33.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1530'
Feb 11 13:35:34.221: INFO: stderr: ""
Feb 11 13:35:34.221: INFO: stdout: "service/frontend created\n"
Feb 11 13:35:34.221: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 11 13:35:34.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1530'
Feb 11 13:35:34.516: INFO: stderr: ""
Feb 11 13:35:34.516: INFO: stdout: "deployment.apps/frontend created\n"
Feb 11 13:35:34.517: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 11 13:35:34.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1530'
Feb 11 13:35:35.152: INFO: stderr: ""
Feb 11 13:35:35.153: INFO: stdout: "deployment.apps/redis-master created\n"
Feb 11 13:35:35.154: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 11 13:35:35.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1530'
Feb 11 13:35:36.976: INFO: stderr: ""
Feb 11 13:35:36.976: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb 11 13:35:36.976: INFO: Waiting for all frontend pods to be Running.
Feb 11 13:35:57.030: INFO: Waiting for frontend to serve content.
Feb 11 13:35:59.665: INFO: Trying to add a new entry to the guestbook.
Feb 11 13:35:59.712: INFO: Verifying that added entry can be retrieved.
Feb 11 13:35:59.735: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Feb 11 13:36:04.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1530'
Feb 11 13:36:05.101: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 13:36:05.102: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 11 13:36:05.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1530'
Feb 11 13:36:05.420: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 13:36:05.420: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 11 13:36:05.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1530'
Feb 11 13:36:05.680: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 13:36:05.680: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 11 13:36:05.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1530'
Feb 11 13:36:05.892: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 13:36:05.892: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 11 13:36:05.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1530'
Feb 11 13:36:06.037: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 13:36:06.037: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 11 13:36:06.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1530'
Feb 11 13:36:06.461: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 13:36:06.462: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:36:06.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1530" for this suite.
Feb 11 13:36:48.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:36:48.713: INFO: namespace kubectl-1530 deletion completed in 42.233397943s

• [SLOW TEST:78.286 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
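The frontend Service in the transcript is created without a type and therefore defaults to ClusterIP; as the comment embedded in its manifest notes, a cluster that supports external load balancers could expose it directly by adding the type:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer               # provisions an external IP where the cluster supports it
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend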
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:36:48.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-b9e5e3cc-42d0-4dbc-95c2-221aa2f9aa9e
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:36:48.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1948" for this suite.
Feb 11 13:36:54.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:36:55.147: INFO: namespace configmap-1948 deletion completed in 6.19161416s

• [SLOW TEST:6.431 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
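The rejection above comes from API-server validation of ConfigMap keys; a minimal sketch of a manifest that fails for the same reason:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-empty-key-demo
data:
  "": "value"                      # invalid: a key must be non-empty (alphanumerics, '-', '_' or '.')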
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:36:55.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-0b78ea16-c347-4a35-a6fa-accbf12130c6
STEP: Creating a pod to test consume configMaps
Feb 11 13:36:55.332: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dcf4d1d6-364f-4908-ba8f-3135eb7d7751" in namespace "projected-4462" to be "success or failure"
Feb 11 13:36:55.338: INFO: Pod "pod-projected-configmaps-dcf4d1d6-364f-4908-ba8f-3135eb7d7751": Phase="Pending", Reason="", readiness=false. Elapsed: 5.31939ms
Feb 11 13:36:57.350: INFO: Pod "pod-projected-configmaps-dcf4d1d6-364f-4908-ba8f-3135eb7d7751": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017718095s
Feb 11 13:36:59.365: INFO: Pod "pod-projected-configmaps-dcf4d1d6-364f-4908-ba8f-3135eb7d7751": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03218381s
Feb 11 13:37:01.374: INFO: Pod "pod-projected-configmaps-dcf4d1d6-364f-4908-ba8f-3135eb7d7751": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041766037s
Feb 11 13:37:03.382: INFO: Pod "pod-projected-configmaps-dcf4d1d6-364f-4908-ba8f-3135eb7d7751": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049094289s
STEP: Saw pod success
Feb 11 13:37:03.382: INFO: Pod "pod-projected-configmaps-dcf4d1d6-364f-4908-ba8f-3135eb7d7751" satisfied condition "success or failure"
Feb 11 13:37:03.388: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-dcf4d1d6-364f-4908-ba8f-3135eb7d7751 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 11 13:37:03.442: INFO: Waiting for pod pod-projected-configmaps-dcf4d1d6-364f-4908-ba8f-3135eb7d7751 to disappear
Feb 11 13:37:03.463: INFO: Pod pod-projected-configmaps-dcf4d1d6-364f-4908-ba8f-3135eb7d7751 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:37:03.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4462" for this suite.
Feb 11 13:37:09.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:37:09.619: INFO: namespace projected-4462 deletion completed in 6.150972831s

• [SLOW TEST:14.472 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
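The non-root variant runs the consuming container under an unprivileged UID; a minimal sketch, assuming UID 1000, a busybox image, and a pre-existing ConfigMap my-configmap with a key data-1:

apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # every container in the pod runs as this non-root UID
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: my-configmap
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected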
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:37:09.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:37:18.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1250" for this suite.
Feb 11 13:37:40.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:37:41.114: INFO: namespace replication-controller-1250 deletion completed in 22.246051991s

• [SLOW TEST:31.494 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
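The adoption scenario boils down to an orphan pod whose labels already satisfy a later ReplicationController's selector; a minimal sketch using the pod-adoption name from the transcript and an illustrative nginx image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption             # matches the orphan pod, so the controller adopts it
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: nginx
        image: nginx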
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:37:41.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 13:37:41.286: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11ad079e-7004-4c07-8617-f12069a74e92" in namespace "downward-api-9661" to be "success or failure"
Feb 11 13:37:41.360: INFO: Pod "downwardapi-volume-11ad079e-7004-4c07-8617-f12069a74e92": Phase="Pending", Reason="", readiness=false. Elapsed: 73.809761ms
Feb 11 13:37:43.378: INFO: Pod "downwardapi-volume-11ad079e-7004-4c07-8617-f12069a74e92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091853838s
Feb 11 13:37:45.386: INFO: Pod "downwardapi-volume-11ad079e-7004-4c07-8617-f12069a74e92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100344118s
Feb 11 13:37:47.396: INFO: Pod "downwardapi-volume-11ad079e-7004-4c07-8617-f12069a74e92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110309928s
Feb 11 13:37:49.408: INFO: Pod "downwardapi-volume-11ad079e-7004-4c07-8617-f12069a74e92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.122391532s
STEP: Saw pod success
Feb 11 13:37:49.409: INFO: Pod "downwardapi-volume-11ad079e-7004-4c07-8617-f12069a74e92" satisfied condition "success or failure"
Feb 11 13:37:49.416: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-11ad079e-7004-4c07-8617-f12069a74e92 container client-container: 
STEP: delete the pod
Feb 11 13:37:49.529: INFO: Waiting for pod downwardapi-volume-11ad079e-7004-4c07-8617-f12069a74e92 to disappear
Feb 11 13:37:49.541: INFO: Pod downwardapi-volume-11ad079e-7004-4c07-8617-f12069a74e92 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:37:49.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9661" for this suite.
Feb 11 13:37:55.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:37:55.651: INFO: namespace downward-api-9661 deletion completed in 6.102324427s

• [SLOW TEST:14.537 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
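The cpu limit reaches the container as a file through a downwardAPI volume item with a resourceFieldRef; a minimal sketch, assuming a busybox image, a 500m limit, and a 1m divisor so the file carries millicores:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m              # file reads "500", the limit expressed in millicores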
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:37:55.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 13:37:55.712: INFO: Creating deployment "nginx-deployment"
Feb 11 13:37:55.721: INFO: Waiting for observed generation 1
Feb 11 13:37:59.466: INFO: Waiting for all required pods to come up
Feb 11 13:37:59.511: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 11 13:38:23.790: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 11 13:38:23.806: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 11 13:38:23.824: INFO: Updating deployment nginx-deployment
Feb 11 13:38:23.825: INFO: Waiting for observed generation 2
Feb 11 13:38:26.486: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 11 13:38:26.652: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 11 13:38:27.156: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 11 13:38:27.225: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 11 13:38:27.225: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 11 13:38:27.231: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 11 13:38:27.239: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 11 13:38:27.240: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 11 13:38:27.406: INFO: Updating deployment nginx-deployment
Feb 11 13:38:27.406: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 11 13:38:27.485: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 11 13:38:28.937: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 11 13:38:29.455: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-3628,SelfLink:/apis/apps/v1/namespaces/deployment-3628/deployments/nginx-deployment,UID:e28b66b9-b135-4070-903e-2bdde00cd0cc,ResourceVersion:23949804,Generation:3,CreationTimestamp:2020-02-11 13:37:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-11 13:38:26 +0000 UTC 2020-02-11 13:37:55 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-02-11 13:38:27 +0000 UTC 2020-02-11 13:38:27 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb 11 13:38:29.684: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-3628,SelfLink:/apis/apps/v1/namespaces/deployment-3628/replicasets/nginx-deployment-55fb7cb77f,UID:90761224-cf5d-4001-beac-0f66803d604c,ResourceVersion:23949800,Generation:3,CreationTimestamp:2020-02-11 13:38:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e28b66b9-b135-4070-903e-2bdde00cd0cc 0xc00286e287 0xc00286e288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 11 13:38:29.684: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb 11 13:38:29.685: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-3628,SelfLink:/apis/apps/v1/namespaces/deployment-3628/replicasets/nginx-deployment-7b8c6f4498,UID:19809133-d9c2-4c41-9709-d0d514109632,ResourceVersion:23949799,Generation:3,CreationTimestamp:2020-02-11 13:37:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e28b66b9-b135-4070-903e-2bdde00cd0cc 0xc00286e357 0xc00286e358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
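Both ReplicaSets carry deployment.kubernetes.io/desired-replicas: 30 and deployment.kubernetes.io/max-replicas: 33 (desired plus maxSurge), the cap behind the proportional split, while the revision annotation (1 versus 2) marks old versus new. Under the same assumptions as the previous sketch, listing the Deployment's ReplicaSets is a plain label-selector query:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The Deployment's selector is name=nginx; each ReplicaSet adds its
	// own pod-template-hash label on top of it.
	rss, err := cs.AppsV1().ReplicaSets("deployment-3628").
		List(context.TODO(), metav1.ListOptions{LabelSelector: "name=nginx"})
	if err != nil {
		panic(err)
	}
	for _, rs := range rss.Items {
		fmt.Printf("%s rev=%s desired=%s max=%s spec.replicas=%d\n",
			rs.Name,
			rs.Annotations["deployment.kubernetes.io/revision"],
			rs.Annotations["deployment.kubernetes.io/desired-replicas"],
			rs.Annotations["deployment.kubernetes.io/max-replicas"],
			*rs.Spec.Replicas)
	}
}
```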
Feb 11 13:38:30.718: INFO: Pod "nginx-deployment-55fb7cb77f-4qchc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4qchc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-55fb7cb77f-4qchc,UID:5fdd0f28-259f-4f54-b307-ec1885b015da,ResourceVersion:23949796,Generation:0,CreationTimestamp:2020-02-11 13:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90761224-cf5d-4001-beac-0f66803d604c 0xc00286ecc7 0xc00286ecc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00286ed30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00286ed50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-11 13:38:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
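The framework's "is available" / "is not available" labels reduce to the pod's phase plus its Ready condition; every new-ReplicaSet pod here stays Pending because its image tag, nginx:404, cannot be pulled. A stripped-down availability check in the spirit of the framework's helper (a simplified sketch; the real k8s.io/kubernetes pod helper also honors minReadySeconds):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodAvailable is a minimal readiness test: the pod must be Running
// and report a Ready condition with status True.
func isPodAvailable(pod *corev1.Pod) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A Pending pod, like the nginx:404 ones above, is never available.
	pending := &corev1.Pod{Status: corev1.PodStatus{Phase: corev1.PodPending}}
	fmt.Println(isPodAvailable(pending)) // prints false
}
```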
Feb 11 13:38:30.719: INFO: Pod "nginx-deployment-55fb7cb77f-cjvgw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cjvgw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-55fb7cb77f-cjvgw,UID:df43f79c-dfaf-42fc-9f43-865ee992762f,ResourceVersion:23949791,Generation:0,CreationTimestamp:2020-02-11 13:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90761224-cf5d-4001-beac-0f66803d604c 0xc00286ee27 0xc00286ee28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00286eea0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00286eec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-11 13:38:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.720: INFO: Pod "nginx-deployment-55fb7cb77f-d8sw8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-d8sw8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-55fb7cb77f-d8sw8,UID:0ccf4e65-ab96-4b9e-a669-e4ba18ade445,ResourceVersion:23949817,Generation:0,CreationTimestamp:2020-02-11 13:38:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90761224-cf5d-4001-beac-0f66803d604c 0xc00286ef97 0xc00286ef98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00286f000} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00286f020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.720: INFO: Pod "nginx-deployment-55fb7cb77f-f8582" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-f8582,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-55fb7cb77f-f8582,UID:48d18d99-4009-404b-a4a8-5a5078ce4074,ResourceVersion:23949822,Generation:0,CreationTimestamp:2020-02-11 13:38:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90761224-cf5d-4001-beac-0f66803d604c 0xc00286f090 0xc00286f091}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00286f100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00286f120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.720: INFO: Pod "nginx-deployment-55fb7cb77f-gz4hj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gz4hj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-55fb7cb77f-gz4hj,UID:ec277e33-2bce-4907-83e3-71c8f4e7141c,ResourceVersion:23949769,Generation:0,CreationTimestamp:2020-02-11 13:38:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90761224-cf5d-4001-beac-0f66803d604c 0xc00286f190 0xc00286f191}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00286f200} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00286f220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-11 13:38:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.721: INFO: Pod "nginx-deployment-55fb7cb77f-l9drb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-l9drb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-55fb7cb77f-l9drb,UID:a017c19a-b7b4-4ec1-ae2e-946ba26a8199,ResourceVersion:23949819,Generation:0,CreationTimestamp:2020-02-11 13:38:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90761224-cf5d-4001-beac-0f66803d604c 0xc00286f2f7 0xc00286f2f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00286f360} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00286f380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.722: INFO: Pod "nginx-deployment-55fb7cb77f-mzzvq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mzzvq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-55fb7cb77f-mzzvq,UID:c48d5e90-b1f0-4bff-8954-4a2dc82f719b,ResourceVersion:23949815,Generation:0,CreationTimestamp:2020-02-11 13:38:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90761224-cf5d-4001-beac-0f66803d604c 0xc00286f3f0 0xc00286f3f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00286f460} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00286f480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:29 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.722: INFO: Pod "nginx-deployment-55fb7cb77f-nzdls" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nzdls,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-55fb7cb77f-nzdls,UID:1cb6f0dd-55c9-44fe-b711-4f3d66b5c9b6,ResourceVersion:23949767,Generation:0,CreationTimestamp:2020-02-11 13:38:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90761224-cf5d-4001-beac-0f66803d604c 0xc00286f507 0xc00286f508}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00286f580} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00286f5a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-11 13:38:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.723: INFO: Pod "nginx-deployment-55fb7cb77f-rd2xg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rd2xg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-55fb7cb77f-rd2xg,UID:d4c17f6a-bb9a-4cfc-a2c8-55144d7d175c,ResourceVersion:23949823,Generation:0,CreationTimestamp:2020-02-11 13:38:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90761224-cf5d-4001-beac-0f66803d604c 0xc00286f677 0xc00286f678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00286f6e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00286f700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.723: INFO: Pod "nginx-deployment-55fb7cb77f-rjjrh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rjjrh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-55fb7cb77f-rjjrh,UID:644a9697-79f4-40cf-8a90-f57486b2cc9a,ResourceVersion:23949827,Generation:0,CreationTimestamp:2020-02-11 13:38:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90761224-cf5d-4001-beac-0f66803d604c 0xc00286f770 0xc00286f771}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00286f7e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00286f800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:29 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.723: INFO: Pod "nginx-deployment-55fb7cb77f-xshqf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xshqf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-55fb7cb77f-xshqf,UID:ef60dba5-573d-446b-809b-b2d07b581da4,ResourceVersion:23949782,Generation:0,CreationTimestamp:2020-02-11 13:38:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90761224-cf5d-4001-beac-0f66803d604c 0xc00286f887 0xc00286f888}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00286f900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00286f920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-11 13:38:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.724: INFO: Pod "nginx-deployment-55fb7cb77f-z955f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z955f,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-55fb7cb77f-z955f,UID:d5e6b13a-bea8-4a78-921f-c383a7b25c53,ResourceVersion:23949826,Generation:0,CreationTimestamp:2020-02-11 13:38:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90761224-cf5d-4001-beac-0f66803d604c 0xc00286f9f7 0xc00286f9f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00286fa70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00286fa90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:29 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.725: INFO: Pod "nginx-deployment-7b8c6f4498-2crnb" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2crnb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-7b8c6f4498-2crnb,UID:3d7f1b3d-6f67-47fb-b38f-883e16dd2616,ResourceVersion:23949705,Generation:0,CreationTimestamp:2020-02-11 13:37:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19809133-d9c2-4c41-9709-d0d514109632 0xc00286fb17 0xc00286fb18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00286fb90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00286fbb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:37:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:37:55 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-11 13:37:56 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 13:38:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://17640109f029636fa751e5ab79fb2912b786f2c7ba4bd2732f0c57e5f8bb27d1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.725: INFO: Pod "nginx-deployment-7b8c6f4498-42w6w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-42w6w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-7b8c6f4498-42w6w,UID:24f98ac7-f3e8-409c-88f1-93041c09431f,ResourceVersion:23949821,Generation:0,CreationTimestamp:2020-02-11 13:38:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19809133-d9c2-4c41-9709-d0d514109632 0xc00286fc87 0xc00286fc88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00286fcf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00286fd10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.726: INFO: Pod "nginx-deployment-7b8c6f4498-6sp77" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6sp77,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-7b8c6f4498-6sp77,UID:347dcca4-0ba2-43dd-84c7-222b20b24690,ResourceVersion:23949824,Generation:0,CreationTimestamp:2020-02-11 13:38:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19809133-d9c2-4c41-9709-d0d514109632 0xc00286fd80 0xc00286fd81}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00286fde0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00286fe00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.726: INFO: Pod "nginx-deployment-7b8c6f4498-7t6xq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7t6xq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-7b8c6f4498-7t6xq,UID:b0c797f7-4cf6-4c1d-9ea9-d09e037430e4,ResourceVersion:23949691,Generation:0,CreationTimestamp:2020-02-11 13:37:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19809133-d9c2-4c41-9709-d0d514109632 0xc00286fe70 0xc00286fe71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00286fee0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00286ff00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:37:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:37:55 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-02-11 13:37:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 13:38:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4f44d8a3392d81677a03acfd5593323fbbcf6f0c7319102d70f32286e05c7c4f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.727: INFO: Pod "nginx-deployment-7b8c6f4498-8mt8v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8mt8v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-7b8c6f4498-8mt8v,UID:9c440bc3-c444-47a1-ba01-caec089d7ba1,ResourceVersion:23949818,Generation:0,CreationTimestamp:2020-02-11 13:38:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19809133-d9c2-4c41-9709-d0d514109632 0xc00286ffd7 0xc00286ffd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002592050} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002592070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:29 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.727: INFO: Pod "nginx-deployment-7b8c6f4498-hwzcr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hwzcr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-7b8c6f4498-hwzcr,UID:fe8b3d6a-1f69-4eb5-a14b-93aa6a1aad92,ResourceVersion:23949710,Generation:0,CreationTimestamp:2020-02-11 13:37:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19809133-d9c2-4c41-9709-d0d514109632 0xc002592127 0xc002592128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025921a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025921c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:37:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:37:55 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-11 13:37:56 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 13:38:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e33c0e8e52221305f2858491f8e291fbeb2675007a1046e7591fe7f842498fcc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.727: INFO: Pod "nginx-deployment-7b8c6f4498-hzjh8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hzjh8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-7b8c6f4498-hzjh8,UID:073f327b-b7a2-48d5-9b05-4dbe057a5761,ResourceVersion:23949685,Generation:0,CreationTimestamp:2020-02-11 13:37:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19809133-d9c2-4c41-9709-d0d514109632 0xc002592417 0xc002592418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002592490} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025924b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:37:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:37:55 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-11 13:37:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 13:38:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b65dc93a04f610419b803c84863864d74d8a3fc512bf2b9194ba80705f189624}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.728: INFO: Pod "nginx-deployment-7b8c6f4498-jxqhg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jxqhg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-7b8c6f4498-jxqhg,UID:cc352146-5c03-493f-b393-029be5f608f6,ResourceVersion:23949728,Generation:0,CreationTimestamp:2020-02-11 13:37:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19809133-d9c2-4c41-9709-d0d514109632 0xc002592637 0xc002592638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002592720} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002592740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:37:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:37:55 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-11 13:37:56 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 13:38:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://880462e3c8761fb13a5320f596dd8320559540819e9c15e557d85f9c04f8eacc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.728: INFO: Pod "nginx-deployment-7b8c6f4498-kxrfn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kxrfn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-7b8c6f4498-kxrfn,UID:922f04ef-f4de-4c8a-8f5b-a6cb08adb1c4,ResourceVersion:23949828,Generation:0,CreationTimestamp:2020-02-11 13:38:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19809133-d9c2-4c41-9709-d0d514109632 0xc0025928c7 0xc0025928c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002592950} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002592970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:29 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.729: INFO: Pod "nginx-deployment-7b8c6f4498-lr46d" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lr46d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-7b8c6f4498-lr46d,UID:6d9b80b3-f0f6-45e2-82d2-1f3cbe8af086,ResourceVersion:23949731,Generation:0,CreationTimestamp:2020-02-11 13:37:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19809133-d9c2-4c41-9709-d0d514109632 0xc002592a77 0xc002592a78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002592b10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002592b30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:37:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:37:55 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-11 13:37:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 13:38:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://baadc6007a601d2346e2b3511edfff8a70981866bb1b7ca90645fdf8eb3867a9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.729: INFO: Pod "nginx-deployment-7b8c6f4498-mfh56" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mfh56,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-7b8c6f4498-mfh56,UID:44e553d5-408a-4ffb-aaca-2dd9c6a4e4c1,ResourceVersion:23949720,Generation:0,CreationTimestamp:2020-02-11 13:37:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19809133-d9c2-4c41-9709-d0d514109632 0xc002592c07 0xc002592c08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002592c70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002592c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:37:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:37:55 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-11 13:37:56 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 13:38:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://567a321c213803a0e663f3e9036434487c413a31d80ed591f302a8edf288406e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.730: INFO: Pod "nginx-deployment-7b8c6f4498-mgjvv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mgjvv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-7b8c6f4498-mgjvv,UID:5eaec4ee-819c-4167-b57d-e0fad10d529b,ResourceVersion:23949814,Generation:0,CreationTimestamp:2020-02-11 13:38:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19809133-d9c2-4c41-9709-d0d514109632 0xc002593007 0xc002593008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002593070} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002593090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:29 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.730: INFO: Pod "nginx-deployment-7b8c6f4498-mlm96" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mlm96,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-7b8c6f4498-mlm96,UID:951c58b8-16c5-4248-88e0-5c5a563ba028,ResourceVersion:23949825,Generation:0,CreationTimestamp:2020-02-11 13:38:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19809133-d9c2-4c41-9709-d0d514109632 0xc002593177 0xc002593178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002593210} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002593230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.730: INFO: Pod "nginx-deployment-7b8c6f4498-mrdw5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mrdw5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-7b8c6f4498-mrdw5,UID:a3f67497-2121-49ad-8a16-3ff31d681ef9,ResourceVersion:23949820,Generation:0,CreationTimestamp:2020-02-11 13:38:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19809133-d9c2-4c41-9709-d0d514109632 0xc0025932a0 0xc0025932a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002593300} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002593320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 13:38:30.730: INFO: Pod "nginx-deployment-7b8c6f4498-zzh4k" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zzh4k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3628,SelfLink:/api/v1/namespaces/deployment-3628/pods/nginx-deployment-7b8c6f4498-zzh4k,UID:e866f981-5a06-41bf-8166-37ada7acf46b,ResourceVersion:23949698,Generation:0,CreationTimestamp:2020-02-11 13:37:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19809133-d9c2-4c41-9709-d0d514109632 0xc0025933a0 0xc0025933a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tn5nz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tn5nz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tn5nz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002593410} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002593430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:37:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:38:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:37:55 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-02-11 13:37:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 13:38:16 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://123bba72a1e35f9b0197662e6cec64a00b435d4e31b794bd99dfc065219b993d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:38:30.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3628" for this suite.
Feb 11 13:39:45.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:39:45.811: INFO: namespace deployment-3628 deletion completed in 1m11.470482084s

• [SLOW TEST:110.159 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
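
The dump above is the deployment test enumerating every pod of ReplicaSet nginx-deployment-7b8c6f4498 mid-rollout: most pods are Running and available, while the newest ones are still Pending. Proportional scaling only applies to RollingUpdate deployments, where maxSurge and maxUnavailable bound how a scale event is split between the old and new ReplicaSets. A minimal Go sketch of a deployment shaped like the one under test, assuming the v1.15-era k8s.io/api and k8s.io/apimachinery modules matching this log; the replica count and strategy numbers are illustrative, not the suite's exact fixture:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(10)                // illustrative desired count
	maxSurge := intstr.FromInt(3)        // extra pods allowed above the desired count
	maxUnavailable := intstr.FromInt(2)  // pods that may be unavailable during rollout

	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
			// Proportional scaling is a property of the RollingUpdate strategy:
			// on scale-up the controller distributes new replicas across old and
			// new ReplicaSets in proportion to their current sizes.
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "nginx",
					Image: "docker.io/library/nginx:1.14-alpine", // image used by the pods above
				}}},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}
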
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:39:45.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-7fff1675-1a82-487f-a3d7-4ad37d30520f in namespace container-probe-8937
Feb 11 13:40:10.980: INFO: Started pod liveness-7fff1675-1a82-487f-a3d7-4ad37d30520f in namespace container-probe-8937
STEP: checking the pod's current state and verifying that restartCount is present
Feb 11 13:40:10.984: INFO: Initial restart count of pod liveness-7fff1675-1a82-487f-a3d7-4ad37d30520f is 0
Feb 11 13:40:27.061: INFO: Restart count of pod container-probe-8937/liveness-7fff1675-1a82-487f-a3d7-4ad37d30520f is now 1 (16.076449049s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:40:27.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8937" for this suite.
Feb 11 13:40:33.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:40:33.336: INFO: namespace container-probe-8937 deletion completed in 6.198698404s

• [SLOW TEST:47.525 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
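
The restart observed at 13:40:27 (restart count 0 -> 1 after about 16s) is the kubelet reacting to failing HTTP GETs against /healthz and recreating the container. A minimal sketch of such a probe, assuming the v1.15-era k8s.io/api where the embedded field is named Handler (newer releases call it ProbeHandler); the image, port, and timings are illustrative, not the suite's fixture:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/liveness", // illustrative: serves /healthz, then starts failing
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // named ProbeHandler in newer k8s.io/api
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15, // give the server time to come up
					PeriodSeconds:       3,  // probe every 3 seconds
					FailureThreshold:    3,  // restart after 3 consecutive failures
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

With RestartPolicy Always (the default), each probe failure past the threshold bumps the container's restart count, which is exactly what the test polls for.
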
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:40:33.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 11 13:40:33.393: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 11 13:40:33.407: INFO: Waiting for terminating namespaces to be deleted...
Feb 11 13:40:33.409: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb 11 13:40:33.425: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 11 13:40:33.426: INFO: 	Container weave ready: true, restart count 0
Feb 11 13:40:33.426: INFO: 	Container weave-npc ready: true, restart count 0
Feb 11 13:40:33.426: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container status recorded)
Feb 11 13:40:33.426: INFO: 	Container kube-bench ready: false, restart count 0
Feb 11 13:40:33.426: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Feb 11 13:40:33.426: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 11 13:40:33.426: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb 11 13:40:33.453: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Feb 11 13:40:33.453: INFO: 	Container kube-controller-manager ready: true, restart count 21
Feb 11 13:40:33.453: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Feb 11 13:40:33.453: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 11 13:40:33.453: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Feb 11 13:40:33.453: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 11 13:40:33.453: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Feb 11 13:40:33.453: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb 11 13:40:33.453: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb 11 13:40:33.453: INFO: 	Container coredns ready: true, restart count 0
Feb 11 13:40:33.453: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Feb 11 13:40:33.453: INFO: 	Container etcd ready: true, restart count 0
Feb 11 13:40:33.453: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 11 13:40:33.453: INFO: 	Container weave ready: true, restart count 0
Feb 11 13:40:33.453: INFO: 	Container weave-npc ready: true, restart count 0
Feb 11 13:40:33.453: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb 11 13:40:33.453: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb 11 13:40:33.539: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 11 13:40:33.539: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 11 13:40:33.539: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 11 13:40:33.539: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb 11 13:40:33.539: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb 11 13:40:33.539: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 11 13:40:33.539: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb 11 13:40:33.539: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 11 13:40:33.539: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb 11 13:40:33.539: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1cbaa3f4-1a09-4fc6-9b8d-e86dc5440ba2.15f25cc819e482cb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-571/filler-pod-1cbaa3f4-1a09-4fc6-9b8d-e86dc5440ba2 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1cbaa3f4-1a09-4fc6-9b8d-e86dc5440ba2.15f25cc9472532a6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1cbaa3f4-1a09-4fc6-9b8d-e86dc5440ba2.15f25cca18b3e0e7], Reason = [Created], Message = [Created container filler-pod-1cbaa3f4-1a09-4fc6-9b8d-e86dc5440ba2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1cbaa3f4-1a09-4fc6-9b8d-e86dc5440ba2.15f25cca3d1ecf34], Reason = [Started], Message = [Started container filler-pod-1cbaa3f4-1a09-4fc6-9b8d-e86dc5440ba2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bb288bab-aa27-4a5d-a724-be452865906f.15f25cc814821fc3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-571/filler-pod-bb288bab-aa27-4a5d-a724-be452865906f to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bb288bab-aa27-4a5d-a724-be452865906f.15f25cc932d301fd], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bb288bab-aa27-4a5d-a724-be452865906f.15f25cc9e3783e86], Reason = [Created], Message = [Created container filler-pod-bb288bab-aa27-4a5d-a724-be452865906f]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bb288bab-aa27-4a5d-a724-be452865906f.15f25cca0e7e066d], Reason = [Started], Message = [Started container filler-pod-bb288bab-aa27-4a5d-a724-be452865906f]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f25ccae8808038], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:40:46.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-571" for this suite.
Feb 11 13:40:54.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:40:54.978: INFO: namespace sched-pred-571 deletion completed in 8.146810439s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.642 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
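
The test above first tallies the CPU requests already committed on each node, then launches "filler" pods sized to consume the remaining allocatable CPU, and finally shows that one more requesting pod is rejected with "0/2 nodes are available: 2 Insufficient cpu." The predicate operates on requests, not limits. A minimal sketch of a pod whose CPU request drives that scheduling decision (the quantity is illustrative, not the value the suite computes):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // same image the filler pods above use
				Resources: corev1.ResourceRequirements{
					// The scheduler sums the CPU *requests* of pods already bound
					// to a node; if allocatable minus that sum is smaller than
					// this request, the node is rejected with "Insufficient cpu".
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("600m"), // illustrative
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
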
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:40:54.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 11 13:40:57.721: INFO: Pod name wrapped-volume-race-8f519d7d-0dac-4ec3-a990-a11caa99722e: Found 0 pods out of 5
Feb 11 13:41:02.754: INFO: Pod name wrapped-volume-race-8f519d7d-0dac-4ec3-a990-a11caa99722e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8f519d7d-0dac-4ec3-a990-a11caa99722e in namespace emptydir-wrapper-8141, will wait for the garbage collector to delete the pods
Feb 11 13:41:28.890: INFO: Deleting ReplicationController wrapped-volume-race-8f519d7d-0dac-4ec3-a990-a11caa99722e took: 31.317772ms
Feb 11 13:41:29.191: INFO: Terminating ReplicationController wrapped-volume-race-8f519d7d-0dac-4ec3-a990-a11caa99722e pods took: 301.363992ms
STEP: Creating RC which spawns configmap-volume pods
Feb 11 13:42:17.039: INFO: Pod name wrapped-volume-race-ecb66f04-cfb1-4595-b83d-faf50a552498: Found 0 pods out of 5
Feb 11 13:42:22.054: INFO: Pod name wrapped-volume-race-ecb66f04-cfb1-4595-b83d-faf50a552498: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ecb66f04-cfb1-4595-b83d-faf50a552498 in namespace emptydir-wrapper-8141, will wait for the garbage collector to delete the pods
Feb 11 13:42:50.163: INFO: Deleting ReplicationController wrapped-volume-race-ecb66f04-cfb1-4595-b83d-faf50a552498 took: 11.050566ms
Feb 11 13:42:50.564: INFO: Terminating ReplicationController wrapped-volume-race-ecb66f04-cfb1-4595-b83d-faf50a552498 pods took: 400.825825ms
STEP: Creating RC which spawns configmap-volume pods
Feb 11 13:43:36.772: INFO: Pod name wrapped-volume-race-c54ab15c-ce06-42ea-b5fb-ce50938fbd83: Found 0 pods out of 5
Feb 11 13:43:41.831: INFO: Pod name wrapped-volume-race-c54ab15c-ce06-42ea-b5fb-ce50938fbd83: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c54ab15c-ce06-42ea-b5fb-ce50938fbd83 in namespace emptydir-wrapper-8141, will wait for the garbage collector to delete the pods
Feb 11 13:44:07.981: INFO: Deleting ReplicationController wrapped-volume-race-c54ab15c-ce06-42ea-b5fb-ce50938fbd83 took: 22.877655ms
Feb 11 13:44:08.482: INFO: Terminating ReplicationController wrapped-volume-race-c54ab15c-ce06-42ea-b5fb-ce50938fbd83 pods took: 500.988779ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:44:53.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8141" for this suite.
Feb 11 13:45:03.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:45:03.840: INFO: namespace emptydir-wrapper-8141 deletion completed in 10.211458761s

• [SLOW TEST:248.861 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
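
Each racer pod spawned by the ReplicationControllers above mounts many ConfigMap volumes at once; these projections were historically materialized through an emptyDir "wrapper" volume, and concurrent setup and teardown of that wrapper is what used to race. A minimal sketch of wiring N ConfigMap volumes into one pod (ConfigMap names are hypothetical, and the RC-based fan-out to 5 pods is omitted):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	const n = 5 // the suite creates 50 ConfigMaps; kept small here
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < n; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i) // hypothetical names
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{
			Name:      name,
			MountPath: fmt.Sprintf("/etc/config-%d", i),
		})
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race"},
		Spec: corev1.PodSpec{
			Volumes: volumes,
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "k8s.gcr.io/pause:3.1",
				VolumeMounts: mounts,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Repeatedly creating and garbage-collecting five such pods at once, as the test does three times over, is what would surface a mount race.
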
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:45:03.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 13:45:16.159: INFO: Waiting up to 5m0s for pod "client-envvars-7c9dae3e-0fba-4717-94d8-7f6f50603069" in namespace "pods-7092" to be "success or failure"
Feb 11 13:45:16.202: INFO: Pod "client-envvars-7c9dae3e-0fba-4717-94d8-7f6f50603069": Phase="Pending", Reason="", readiness=false. Elapsed: 43.011624ms
Feb 11 13:45:18.219: INFO: Pod "client-envvars-7c9dae3e-0fba-4717-94d8-7f6f50603069": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060181975s
Feb 11 13:45:20.229: INFO: Pod "client-envvars-7c9dae3e-0fba-4717-94d8-7f6f50603069": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070611632s
Feb 11 13:45:22.240: INFO: Pod "client-envvars-7c9dae3e-0fba-4717-94d8-7f6f50603069": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081158527s
Feb 11 13:45:24.257: INFO: Pod "client-envvars-7c9dae3e-0fba-4717-94d8-7f6f50603069": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098546094s
STEP: Saw pod success
Feb 11 13:45:24.258: INFO: Pod "client-envvars-7c9dae3e-0fba-4717-94d8-7f6f50603069" satisfied condition "success or failure"
Feb 11 13:45:24.264: INFO: Trying to get logs from node iruya-node pod client-envvars-7c9dae3e-0fba-4717-94d8-7f6f50603069 container env3cont: 
STEP: delete the pod
Feb 11 13:45:24.372: INFO: Waiting for pod client-envvars-7c9dae3e-0fba-4717-94d8-7f6f50603069 to disappear
Feb 11 13:45:24.380: INFO: Pod client-envvars-7c9dae3e-0fba-4717-94d8-7f6f50603069 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:45:24.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7092" for this suite.
Feb 11 13:46:26.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:46:26.650: INFO: namespace pods-7092 deletion completed in 1m2.191386152s

• [SLOW TEST:82.809 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
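
The client pod above passes because the kubelet injects {NAME}_SERVICE_HOST and {NAME}_SERVICE_PORT (plus Docker-link-style variables) for every service that already exists in the namespace when the pod starts; services created afterwards get no variables, which is why the test creates the server and service first. A minimal sketch of what the client container sees, using a hypothetical service named fooservice:

package main

import (
	"fmt"
	"os"
)

func main() {
	// For a service named "fooservice" created *before* this pod started, the
	// kubelet injects at least these variables (hypothetical service name):
	for _, key := range []string{
		"FOOSERVICE_SERVICE_HOST", // the service's ClusterIP
		"FOOSERVICE_SERVICE_PORT", // the first service port
	} {
		if v, ok := os.LookupEnv(key); ok {
			fmt.Printf("%s=%s\n", key, v)
		} else {
			fmt.Printf("%s is unset (was the service created after the pod?)\n", key)
		}
	}
}
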
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:46:26.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of the pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0211 13:46:42.037825       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 11 13:46:42.038: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:46:42.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1326" for this suite.
Feb 11 13:47:00.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:47:00.583: INFO: namespace gc-1326 deletion completed in 18.345111954s

• [SLOW TEST:33.932 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
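
The key move above is giving half of simpletest-rc-to-be-deleted's pods a second owner reference pointing at simpletest-rc-to-stay: the garbage collector only deletes a dependent once every owner is gone, so those pods must survive the first RC's deletion. A minimal sketch of such a dual-owner reference list, assuming v1.15-era k8s.io/apimachinery (the UIDs here are illustrative placeholders, not values from this run):

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	block := true
	owners := []metav1.OwnerReference{
		{
			APIVersion:         "v1",
			Kind:               "ReplicationController",
			Name:               "simpletest-rc-to-be-deleted",
			UID:                types.UID("11111111-1111-1111-1111-111111111111"), // illustrative
			BlockOwnerDeletion: &block, // foreground deletion of the owner waits on this dependent
		},
		{
			// The second, still-valid owner: while this reference exists the GC
			// must not delete the pod, which is exactly what the test asserts.
			APIVersion: "v1",
			Kind:       "ReplicationController",
			Name:       "simpletest-rc-to-stay",
			UID:        types.UID("22222222-2222-2222-2222-222222222222"), // illustrative
		},
	}
	out, _ := json.MarshalIndent(owners, "", "  ")
	fmt.Println(string(out))
}
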
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:47:00.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-5c1feb13-5f46-407d-822a-69418779d005
STEP: Creating a pod to test consume secrets
Feb 11 13:47:00.903: INFO: Waiting up to 5m0s for pod "pod-secrets-ad6baab6-dbee-4923-879c-b6dab14c2669" in namespace "secrets-7531" to be "success or failure"
Feb 11 13:47:00.913: INFO: Pod "pod-secrets-ad6baab6-dbee-4923-879c-b6dab14c2669": Phase="Pending", Reason="", readiness=false. Elapsed: 10.244494ms
Feb 11 13:47:02.921: INFO: Pod "pod-secrets-ad6baab6-dbee-4923-879c-b6dab14c2669": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018444826s
Feb 11 13:47:05.009: INFO: Pod "pod-secrets-ad6baab6-dbee-4923-879c-b6dab14c2669": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106465782s
Feb 11 13:47:07.020: INFO: Pod "pod-secrets-ad6baab6-dbee-4923-879c-b6dab14c2669": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116996586s
Feb 11 13:47:09.031: INFO: Pod "pod-secrets-ad6baab6-dbee-4923-879c-b6dab14c2669": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.127800464s
STEP: Saw pod success
Feb 11 13:47:09.031: INFO: Pod "pod-secrets-ad6baab6-dbee-4923-879c-b6dab14c2669" satisfied condition "success or failure"
Feb 11 13:47:09.037: INFO: Trying to get logs from node iruya-node pod pod-secrets-ad6baab6-dbee-4923-879c-b6dab14c2669 container secret-env-test: 
STEP: delete the pod
Feb 11 13:47:09.117: INFO: Waiting for pod pod-secrets-ad6baab6-dbee-4923-879c-b6dab14c2669 to disappear
Feb 11 13:47:09.122: INFO: Pod pod-secrets-ad6baab6-dbee-4923-879c-b6dab14c2669 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:47:09.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7531" for this suite.
Feb 11 13:47:15.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:47:15.291: INFO: namespace secrets-7531 deletion completed in 6.163190981s

• [SLOW TEST:14.707 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
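
Here the secret is consumed through an environment variable sourced from a single key rather than through a volume. A minimal sketch, with the secret name taken from the log but the key, image, and command illustrative rather than the suite's fixture:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // run once, then "success or failure"
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "env | grep SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{
								Name: "secret-test-5c1feb13-5f46-407d-822a-69418779d005",
							},
							Key: "data-1", // hypothetical key name
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
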
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:47:15.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 11 13:50:14.819: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:14.886: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:16.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:16.901: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:18.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:18.900: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:20.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:20.897: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:22.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:22.898: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:24.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:24.894: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:26.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:26.896: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:28.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:28.902: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:30.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:30.910: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:32.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:32.895: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:34.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:34.899: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:36.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:36.901: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:38.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:38.897: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:40.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:40.896: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:42.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:42.900: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:44.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:44.898: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:46.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:46.896: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:48.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:48.898: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:50.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:50.900: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:52.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:52.902: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:54.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:54.901: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:56.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:56.906: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:50:58.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:50:58.911: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:00.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:00.929: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:02.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:02.902: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:04.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:04.900: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:06.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:06.899: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:08.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:08.901: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:10.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:10.901: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:12.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:12.899: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:14.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:14.897: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:16.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:16.899: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:18.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:18.901: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:20.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:20.904: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:22.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:22.901: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:24.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:24.898: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:26.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:26.905: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:28.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:28.897: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:30.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:30.894: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:32.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:32.899: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:34.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:34.898: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:36.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:36.901: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:38.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:38.899: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:40.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:40.898: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:42.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:42.896: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:44.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:44.897: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 13:51:46.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 13:51:46.894: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:51:46.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4203" for this suite.
Feb 11 13:52:08.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:52:09.055: INFO: namespace container-lifecycle-hook-4203 deletion completed in 22.153559548s

• [SLOW TEST:293.763 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
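Note: for readers following along, a minimal Go sketch of the kind of pod this test exercises — a container with a postStart exec lifecycle hook. Pod/container names, image, and commands are illustrative assumptions, not the suite's actual fixture; types follow current k8s.io/api, where pre-1.23 releases (like the v1.15 suite here) name the handler type corev1.Handler rather than corev1.LifecycleHandler.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Pod whose container runs a command immediately after it starts,
        // via a postStart exec hook (names/image are assumptions).
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "hooked",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "sleep 600"},
                    Lifecycle: &corev1.Lifecycle{
                        PostStart: &corev1.LifecycleHandler{
                            Exec: &corev1.ExecAction{
                                Command: []string{"sh", "-c", "echo started > /tmp/hook"},
                            },
                        },
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }

The long disappearance poll above is the framework waiting out the pod's deletion after the hook has been observed.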
SSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:52:09.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb 11 13:52:09.175: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1929" to be "success or failure"
Feb 11 13:52:09.197: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.034452ms
Feb 11 13:52:11.210: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034552398s
Feb 11 13:52:13.220: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044644535s
Feb 11 13:52:15.233: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057982149s
Feb 11 13:52:17.247: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072038271s
Feb 11 13:52:19.269: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093761879s
Feb 11 13:52:21.284: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.108633862s
STEP: Saw pod success
Feb 11 13:52:21.284: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 11 13:52:21.292: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 11 13:52:21.354: INFO: Waiting for pod pod-host-path-test to disappear
Feb 11 13:52:21.408: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:52:21.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-1929" for this suite.
Feb 11 13:52:27.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:52:27.575: INFO: namespace hostpath-1929 deletion completed in 6.15727859s

• [SLOW TEST:18.520 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
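Note: a minimal sketch of a hostPath pod like the one this test creates; the container inspects the mount's file mode from inside. The host path, image, and command are assumptions (the container name test-container-1 matches the log above).

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Pod mounting a hostPath volume and printing its mode.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        HostPath: &corev1.HostPathVolumeSource{Path: "/tmp"}, // assumed path
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "test-container-1",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "stat -c %a /test-volume"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name: "test-volume", MountPath: "/test-volume",
                    }},
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }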
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:52:27.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 13:52:27.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:52:35.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-475" for this suite.
Feb 11 13:53:37.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:53:38.051: INFO: namespace pods-475 deletion completed in 1m2.199298047s

• [SLOW TEST:70.475 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
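Note: the test dials the API server's pod log endpoint over a websocket; the sketch below reaches the same /log endpoint with client-go's ordinary HTTP stream, which is the usual programmatic route. Kubeconfig path, namespace, and pod name are assumptions; pre-0.18 client-go's Stream takes no context argument.

    package main

    import (
        "context"
        "fmt"
        "io"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Stream a pod's container logs from the API server.
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        req := cs.CoreV1().Pods("default").GetLogs("my-pod", &corev1.PodLogOptions{})
        rc, err := req.Stream(context.Background())
        if err != nil {
            panic(err)
        }
        defer rc.Close()
        if _, err := io.Copy(os.Stdout, rc); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }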
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:53:38.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 11 13:53:38.201: INFO: Waiting up to 5m0s for pod "downward-api-dccd8008-62ad-4198-a015-bb7451351e1c" in namespace "downward-api-2641" to be "success or failure"
Feb 11 13:53:38.259: INFO: Pod "downward-api-dccd8008-62ad-4198-a015-bb7451351e1c": Phase="Pending", Reason="", readiness=false. Elapsed: 57.730303ms
Feb 11 13:53:40.272: INFO: Pod "downward-api-dccd8008-62ad-4198-a015-bb7451351e1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070893318s
Feb 11 13:53:42.280: INFO: Pod "downward-api-dccd8008-62ad-4198-a015-bb7451351e1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078517064s
Feb 11 13:53:44.291: INFO: Pod "downward-api-dccd8008-62ad-4198-a015-bb7451351e1c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089477857s
Feb 11 13:53:46.303: INFO: Pod "downward-api-dccd8008-62ad-4198-a015-bb7451351e1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102304321s
STEP: Saw pod success
Feb 11 13:53:46.304: INFO: Pod "downward-api-dccd8008-62ad-4198-a015-bb7451351e1c" satisfied condition "success or failure"
Feb 11 13:53:46.309: INFO: Trying to get logs from node iruya-node pod downward-api-dccd8008-62ad-4198-a015-bb7451351e1c container dapi-container: 
STEP: delete the pod
Feb 11 13:53:46.383: INFO: Waiting for pod downward-api-dccd8008-62ad-4198-a015-bb7451351e1c to disappear
Feb 11 13:53:46.438: INFO: Pod downward-api-dccd8008-62ad-4198-a015-bb7451351e1c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:53:46.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2641" for this suite.
Feb 11 13:53:52.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:53:52.642: INFO: namespace downward-api-2641 deletion completed in 6.192886271s

• [SLOW TEST:14.590 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
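Note: a minimal sketch of the downward-API mechanism this test exercises — a container's own resource requests/limits surfaced as environment variables via resourceFieldRef. Resource values, env var names, and image are assumptions (the container name dapi-container matches the log above).

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Expose the container's own CPU limit and memory request as env vars.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env"},
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{
                            corev1.ResourceCPU:    resource.MustParse("250m"),
                            corev1.ResourceMemory: resource.MustParse("32Mi"),
                        },
                        Limits: corev1.ResourceList{
                            corev1.ResourceCPU:    resource.MustParse("1"),
                            corev1.ResourceMemory: resource.MustParse("64Mi"),
                        },
                    },
                    Env: []corev1.EnvVar{
                        {Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
                            ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
                        }},
                        {Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
                            ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
                        }},
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }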
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:53:52.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 13:53:52.712: INFO: Creating deployment "test-recreate-deployment"
Feb 11 13:53:52.720: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb 11 13:53:52.816: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb 11 13:53:54.843: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb 11 13:53:54.853: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717026032, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717026032, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717026032, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717026032, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 13:53:56.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717026032, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717026032, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717026032, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717026032, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 13:53:58.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717026032, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717026032, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717026032, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717026032, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 13:54:00.868: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 11 13:54:00.885: INFO: Updating deployment test-recreate-deployment
Feb 11 13:54:00.886: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 11 13:54:01.245: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-5738,SelfLink:/apis/apps/v1/namespaces/deployment-5738/deployments/test-recreate-deployment,UID:22daafb8-235c-4e62-bbcb-5b4031352c60,ResourceVersion:23952608,Generation:2,CreationTimestamp:2020-02-11 13:53:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-11 13:54:01 +0000 UTC 2020-02-11 13:54:01 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-11 13:54:01 +0000 UTC 2020-02-11 13:53:52 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb 11 13:54:01.254: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-5738,SelfLink:/apis/apps/v1/namespaces/deployment-5738/replicasets/test-recreate-deployment-5c8c9cc69d,UID:84d5d6b5-4151-4a57-8120-6fe50855b459,ResourceVersion:23952607,Generation:1,CreationTimestamp:2020-02-11 13:54:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 22daafb8-235c-4e62-bbcb-5b4031352c60 0xc0019a45e7 0xc0019a45e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 11 13:54:01.254: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 11 13:54:01.254: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-5738,SelfLink:/apis/apps/v1/namespaces/deployment-5738/replicasets/test-recreate-deployment-6df85df6b9,UID:04db274d-452b-4665-9e40-8395676b90db,ResourceVersion:23952597,Generation:2,CreationTimestamp:2020-02-11 13:53:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 22daafb8-235c-4e62-bbcb-5b4031352c60 0xc0019a4787 0xc0019a4788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 11 13:54:01.264: INFO: Pod "test-recreate-deployment-5c8c9cc69d-kdsz7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-kdsz7,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-5738,SelfLink:/api/v1/namespaces/deployment-5738/pods/test-recreate-deployment-5c8c9cc69d-kdsz7,UID:2a695ae2-9bfd-4e05-989e-d7d993f3c2e1,ResourceVersion:23952609,Generation:0,CreationTimestamp:2020-02-11 13:54:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 84d5d6b5-4151-4a57-8120-6fe50855b459 0xc00357a2a7 0xc00357a2a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2cmh8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2cmh8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2cmh8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00357a320} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00357a340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:54:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:54:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:54:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 13:54:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-11 13:54:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:54:01.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5738" for this suite.
Feb 11 13:54:09.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:54:09.697: INFO: namespace deployment-5738 deletion completed in 8.428294303s

• [SLOW TEST:17.055 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
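Note: a minimal sketch of a Recreate-strategy Deployment like the one dumped above. With Strategy.Type set to Recreate, all old pods are torn down before any new-revision pod starts, which is exactly what the watch verifies. Labels and image mirror the dump; the helper is local.

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        // Deployment that replaces pods wholesale on every rollout.
        labels := map[string]string{"name": "sample-pod-3"}
        d := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: int32Ptr(1),
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "nginx",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        b, _ := json.MarshalIndent(d, "", "  ")
        fmt.Println(string(b))
    }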
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master service is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:54:09.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master service is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb 11 13:54:09.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 11 13:54:12.112: INFO: stderr: ""
Feb 11 13:54:12.112: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:54:12.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5882" for this suite.
Feb 11 13:54:18.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:54:18.351: INFO: namespace kubectl-5882 deletion completed in 6.2324473s

• [SLOW TEST:8.653 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master service is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:54:18.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-b7a42a28-611d-4c80-86bf-3f6e4d754a8e
STEP: Creating a pod to test consume secrets
Feb 11 13:54:18.542: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8b1f1e11-aee2-4c4d-9aca-c922b4551d83" in namespace "projected-3198" to be "success or failure"
Feb 11 13:54:18.598: INFO: Pod "pod-projected-secrets-8b1f1e11-aee2-4c4d-9aca-c922b4551d83": Phase="Pending", Reason="", readiness=false. Elapsed: 55.997849ms
Feb 11 13:54:20.620: INFO: Pod "pod-projected-secrets-8b1f1e11-aee2-4c4d-9aca-c922b4551d83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077935813s
Feb 11 13:54:22.656: INFO: Pod "pod-projected-secrets-8b1f1e11-aee2-4c4d-9aca-c922b4551d83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114321443s
Feb 11 13:54:24.673: INFO: Pod "pod-projected-secrets-8b1f1e11-aee2-4c4d-9aca-c922b4551d83": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131078919s
Feb 11 13:54:26.678: INFO: Pod "pod-projected-secrets-8b1f1e11-aee2-4c4d-9aca-c922b4551d83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.136597135s
STEP: Saw pod success
Feb 11 13:54:26.679: INFO: Pod "pod-projected-secrets-8b1f1e11-aee2-4c4d-9aca-c922b4551d83" satisfied condition "success or failure"
Feb 11 13:54:26.682: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-8b1f1e11-aee2-4c4d-9aca-c922b4551d83 container secret-volume-test: 
STEP: delete the pod
Feb 11 13:54:26.757: INFO: Waiting for pod pod-projected-secrets-8b1f1e11-aee2-4c4d-9aca-c922b4551d83 to disappear
Feb 11 13:54:26.766: INFO: Pod pod-projected-secrets-8b1f1e11-aee2-4c4d-9aca-c922b4551d83 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:54:26.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3198" for this suite.
Feb 11 13:54:32.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:54:32.997: INFO: namespace projected-3198 deletion completed in 6.220026461s

• [SLOW TEST:14.646 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
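Note: a minimal sketch of the pattern this test checks — one secret consumed through two projected volumes mounted at different paths in the same container. Secret/volume names, mount paths, and image are assumptions (the container name secret-volume-test matches the log above).

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // The same secret, projected into two separate volumes.
        src := corev1.VolumeProjection{
            Secret: &corev1.SecretProjection{
                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
            },
        }
        newVol := func(name string) corev1.Volume {
            return corev1.Volume{
                Name: name,
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{src},
                    },
                },
            }
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes:       []corev1.Volume{newVol("secret-vol-1"), newVol("secret-vol-2")},
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/secret-1/* /etc/secret-2/*"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "secret-vol-1", MountPath: "/etc/secret-1", ReadOnly: true},
                        {Name: "secret-vol-2", MountPath: "/etc/secret-2", ReadOnly: true},
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }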
SS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:54:32.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-6120
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-6120
STEP: Deleting pre-stop pod
Feb 11 13:54:54.268: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:54:54.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6120" for this suite.
Feb 11 13:55:38.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:55:38.541: INFO: namespace prestop-6120 deletion completed in 44.249887472s

• [SLOW TEST:65.544 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
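Note: the {"prestop": 1} entry above comes from the tester pod calling back to the server pod from its preStop hook while it is being deleted. A minimal sketch of a pod with such a hook; the wget target, image, and grace period are assumptions standing in for the suite's fixture.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }

    func main() {
        // Pod that notifies a peer from its preStop hook before termination.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "tester"},
            Spec: corev1.PodSpec{
                TerminationGracePeriodSeconds: int64Ptr(30),
                Containers: []corev1.Container{{
                    Name:    "tester",
                    Image:   "busybox",
                    Command: []string{"sleep", "600"},
                    Lifecycle: &corev1.Lifecycle{
                        PreStop: &corev1.LifecycleHandler{
                            Exec: &corev1.ExecAction{
                                Command: []string{"wget", "-O-", "http://server:8080/prestop"},
                            },
                        },
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }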
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:55:38.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 13:55:38.631: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1202c7f-61af-4f86-927a-2c8c28d60f2b" in namespace "projected-7750" to be "success or failure"
Feb 11 13:55:38.696: INFO: Pod "downwardapi-volume-d1202c7f-61af-4f86-927a-2c8c28d60f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 64.018708ms
Feb 11 13:55:40.711: INFO: Pod "downwardapi-volume-d1202c7f-61af-4f86-927a-2c8c28d60f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079590916s
Feb 11 13:55:42.719: INFO: Pod "downwardapi-volume-d1202c7f-61af-4f86-927a-2c8c28d60f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087871502s
Feb 11 13:55:44.726: INFO: Pod "downwardapi-volume-d1202c7f-61af-4f86-927a-2c8c28d60f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094716978s
Feb 11 13:55:46.734: INFO: Pod "downwardapi-volume-d1202c7f-61af-4f86-927a-2c8c28d60f2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102397875s
STEP: Saw pod success
Feb 11 13:55:46.734: INFO: Pod "downwardapi-volume-d1202c7f-61af-4f86-927a-2c8c28d60f2b" satisfied condition "success or failure"
Feb 11 13:55:46.738: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d1202c7f-61af-4f86-927a-2c8c28d60f2b container client-container: 
STEP: delete the pod
Feb 11 13:55:46.786: INFO: Waiting for pod downwardapi-volume-d1202c7f-61af-4f86-927a-2c8c28d60f2b to disappear
Feb 11 13:55:46.798: INFO: Pod downwardapi-volume-d1202c7f-61af-4f86-927a-2c8c28d60f2b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:55:46.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7750" for this suite.
Feb 11 13:55:52.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:55:53.076: INFO: namespace projected-7750 deletion completed in 6.273047425s

• [SLOW TEST:14.534 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
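Note: a minimal sketch of the projected downward-API volume this test reads from — the container's own CPU request exposed as a file it then cats. File path, image, and the 250m request are assumptions (client-container matches the log above).

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // CPU request surfaced as a file via a projected downward-API volume.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path: "cpu_request",
                                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "requests.cpu",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }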
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:55:53.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 13:55:53.291: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4680e7ea-a59f-478e-95d9-65caf4fec95a" in namespace "downward-api-7290" to be "success or failure"
Feb 11 13:55:53.311: INFO: Pod "downwardapi-volume-4680e7ea-a59f-478e-95d9-65caf4fec95a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.226105ms
Feb 11 13:55:55.319: INFO: Pod "downwardapi-volume-4680e7ea-a59f-478e-95d9-65caf4fec95a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028140925s
Feb 11 13:55:57.336: INFO: Pod "downwardapi-volume-4680e7ea-a59f-478e-95d9-65caf4fec95a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045370831s
Feb 11 13:55:59.405: INFO: Pod "downwardapi-volume-4680e7ea-a59f-478e-95d9-65caf4fec95a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113572639s
Feb 11 13:56:01.412: INFO: Pod "downwardapi-volume-4680e7ea-a59f-478e-95d9-65caf4fec95a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.121019978s
STEP: Saw pod success
Feb 11 13:56:01.412: INFO: Pod "downwardapi-volume-4680e7ea-a59f-478e-95d9-65caf4fec95a" satisfied condition "success or failure"
Feb 11 13:56:01.415: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4680e7ea-a59f-478e-95d9-65caf4fec95a container client-container: 
STEP: delete the pod
Feb 11 13:56:01.452: INFO: Waiting for pod downwardapi-volume-4680e7ea-a59f-478e-95d9-65caf4fec95a to disappear
Feb 11 13:56:01.456: INFO: Pod downwardapi-volume-4680e7ea-a59f-478e-95d9-65caf4fec95a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:56:01.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7290" for this suite.
Feb 11 13:56:07.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:56:07.698: INFO: namespace downward-api-7290 deletion completed in 6.182331679s

• [SLOW TEST:14.621 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
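Note: same downward-API idea as the previous test but through the plain (non-projected) volume source, here exposing the container's memory limit. Names, image, and the 64Mi limit are assumptions.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Memory limit surfaced as a file via a plain downward-API volume.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.memory",
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }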
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:56:07.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 11 13:56:07.783: INFO: Waiting up to 5m0s for pod "pod-8256fcd3-59e5-4eac-b377-807bc7d137e0" in namespace "emptydir-2532" to be "success or failure"
Feb 11 13:56:07.834: INFO: Pod "pod-8256fcd3-59e5-4eac-b377-807bc7d137e0": Phase="Pending", Reason="", readiness=false. Elapsed: 51.077234ms
Feb 11 13:56:09.845: INFO: Pod "pod-8256fcd3-59e5-4eac-b377-807bc7d137e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061808259s
Feb 11 13:56:11.859: INFO: Pod "pod-8256fcd3-59e5-4eac-b377-807bc7d137e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075259982s
Feb 11 13:56:13.879: INFO: Pod "pod-8256fcd3-59e5-4eac-b377-807bc7d137e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096241776s
Feb 11 13:56:15.895: INFO: Pod "pod-8256fcd3-59e5-4eac-b377-807bc7d137e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.111327601s
STEP: Saw pod success
Feb 11 13:56:15.895: INFO: Pod "pod-8256fcd3-59e5-4eac-b377-807bc7d137e0" satisfied condition "success or failure"
Feb 11 13:56:15.903: INFO: Trying to get logs from node iruya-node pod pod-8256fcd3-59e5-4eac-b377-807bc7d137e0 container test-container: 
STEP: delete the pod
Feb 11 13:56:15.985: INFO: Waiting for pod pod-8256fcd3-59e5-4eac-b377-807bc7d137e0 to disappear
Feb 11 13:56:15.993: INFO: Pod pod-8256fcd3-59e5-4eac-b377-807bc7d137e0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:56:15.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2532" for this suite.
Feb 11 13:56:22.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:56:22.165: INFO: namespace emptydir-2532 deletion completed in 6.164287123s

• [SLOW TEST:14.467 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
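Note: a minimal sketch of the "(non-root,0666,tmpfs)" combination — a memory-backed emptyDir written by a non-root user, with the file mode checked afterwards. The UID, image, and command are assumptions.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }

    func main() {
        // tmpfs emptyDir, non-root writer, 0666 file mode verified in-container.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-test"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{
                    RunAsUser: int64Ptr(1001), // assumed non-root UID
                },
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{
                            Medium: corev1.StorageMediumMemory, // tmpfs
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox",
                    Command: []string{"sh", "-c",
                        "touch /test/f && chmod 0666 /test/f && stat -c %a /test/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }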
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:56:22.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 11 13:56:31.093: INFO: Successfully updated pod "annotationupdateb4d2caf4-c186-422a-a571-e149a58b531d"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:56:33.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1188" for this suite.
Feb 11 13:56:55.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:56:55.365: INFO: namespace downward-api-1188 deletion completed in 22.170310173s

• [SLOW TEST:33.198 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
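Note: what this test relies on is that the kubelet rewrites downward-API volume files when pod metadata changes, so updating the annotations is eventually visible inside the running container. A minimal sketch of such a pod (its projected-volume twin is the next test); names, image, and the annotation value are assumptions.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Downward-API file backed by metadata.annotations; the kubelet
        // refreshes it after the pod's annotations are updated.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:        "annotationupdate-demo",
                Annotations: map[string]string{"builder": "bar"},
            },
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "annotations",
                                FieldRef: &corev1.ObjectFieldSelector{
                                    FieldPath: "metadata.annotations",
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }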
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:56:55.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 11 13:57:04.210: INFO: Successfully updated pod "annotationupdate9dfae755-47f4-46ad-979c-d21e0079e99c"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:57:06.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9814" for this suite.
Feb 11 13:57:28.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:57:28.521: INFO: namespace projected-9814 deletion completed in 22.212427334s

• [SLOW TEST:33.156 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:57:28.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:57:38.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3241" for this suite.
Feb 11 13:58:20.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:58:20.883: INFO: namespace kubelet-test-3241 deletion completed in 42.152293867s

• [SLOW TEST:52.360 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
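Note: a minimal sketch of the hostAliases mechanism this test checks — entries from pod.spec.hostAliases are appended to the container's /etc/hosts by the kubelet. The IP and hostnames are illustrative assumptions.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Extra /etc/hosts entries injected via pod.spec.hostAliases.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
            Spec: corev1.PodSpec{
                HostAliases: []corev1.HostAlias{{
                    IP:        "123.45.67.89",
                    Hostnames: []string{"foo.remote", "bar.remote"},
                }},
                Containers: []corev1.Container{{
                    Name:    "busybox",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/hosts && sleep 600"},
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }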
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:58:20.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 11 13:58:20.973: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:58:38.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7910" for this suite.
Feb 11 13:59:00.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:59:00.743: INFO: namespace init-container-7910 deletion completed in 22.172744533s

• [SLOW TEST:39.860 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
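Note: init containers run to completion one at a time, in order, before the app container starts; restartPolicy Always governs only the app container. A minimal sketch of a RestartAlways pod with two init containers; names, image, and commands are assumptions.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Two init containers that must both succeed before "app" starts.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyAlways,
                InitContainers: []corev1.Container{
                    {Name: "init-1", Image: "busybox", Command: []string{"true"}},
                    {Name: "init-2", Image: "busybox", Command: []string{"true"}},
                },
                Containers: []corev1.Container{
                    {Name: "app", Image: "busybox", Command: []string{"sleep", "600"}},
                },
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }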
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:59:00.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb 11 13:59:10.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-53789cb4-3a51-4e3c-944d-daa431e5c87f -c busybox-main-container --namespace=emptydir-1892 -- cat /usr/share/volumeshare/shareddata.txt'
Feb 11 13:59:11.479: INFO: stderr: "I0211 13:59:11.218983     987 log.go:172] (0xc0008e80b0) (0xc0008ca8c0) Create stream\nI0211 13:59:11.219180     987 log.go:172] (0xc0008e80b0) (0xc0008ca8c0) Stream added, broadcasting: 1\nI0211 13:59:11.226031     987 log.go:172] (0xc0008e80b0) Reply frame received for 1\nI0211 13:59:11.226166     987 log.go:172] (0xc0008e80b0) (0xc000672280) Create stream\nI0211 13:59:11.226187     987 log.go:172] (0xc0008e80b0) (0xc000672280) Stream added, broadcasting: 3\nI0211 13:59:11.228350     987 log.go:172] (0xc0008e80b0) Reply frame received for 3\nI0211 13:59:11.228419     987 log.go:172] (0xc0008e80b0) (0xc00037a000) Create stream\nI0211 13:59:11.228431     987 log.go:172] (0xc0008e80b0) (0xc00037a000) Stream added, broadcasting: 5\nI0211 13:59:11.230744     987 log.go:172] (0xc0008e80b0) Reply frame received for 5\nI0211 13:59:11.330754     987 log.go:172] (0xc0008e80b0) Data frame received for 3\nI0211 13:59:11.330827     987 log.go:172] (0xc000672280) (3) Data frame handling\nI0211 13:59:11.330858     987 log.go:172] (0xc000672280) (3) Data frame sent\nI0211 13:59:11.463174     987 log.go:172] (0xc0008e80b0) (0xc000672280) Stream removed, broadcasting: 3\nI0211 13:59:11.463582     987 log.go:172] (0xc0008e80b0) Data frame received for 1\nI0211 13:59:11.463610     987 log.go:172] (0xc0008ca8c0) (1) Data frame handling\nI0211 13:59:11.463657     987 log.go:172] (0xc0008ca8c0) (1) Data frame sent\nI0211 13:59:11.463696     987 log.go:172] (0xc0008e80b0) (0xc0008ca8c0) Stream removed, broadcasting: 1\nI0211 13:59:11.464015     987 log.go:172] (0xc0008e80b0) (0xc00037a000) Stream removed, broadcasting: 5\nI0211 13:59:11.464216     987 log.go:172] (0xc0008e80b0) Go away received\nI0211 13:59:11.465719     987 log.go:172] (0xc0008e80b0) (0xc0008ca8c0) Stream removed, broadcasting: 1\nI0211 13:59:11.465834     987 log.go:172] (0xc0008e80b0) (0xc000672280) Stream removed, broadcasting: 3\nI0211 13:59:11.465892     987 log.go:172] (0xc0008e80b0) (0xc00037a000) Stream removed, broadcasting: 5\n"
Feb 11 13:59:11.480: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:59:11.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1892" for this suite.
Feb 11 13:59:17.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:59:17.608: INFO: namespace emptydir-1892 deletion completed in 6.121675002s

• [SLOW TEST:16.864 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
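Note: the shared-volume behaviour exercised above can be reproduced outside the e2e framework with a two-container pod mounting the same emptyDir. A minimal sketch, assuming a reachable cluster and the busybox/nginx images; all names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo          # illustrative name
spec:
  volumes:
  - name: share
    emptyDir: {}                    # scratch volume shared by both containers
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo 'hello from writer' > /data/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /data
  - name: reader
    image: nginx:1.14-alpine
    volumeMounts:
    - name: share
      mountPath: /data
EOF
# Once both containers are Running, a file written by one is visible to the other:
kubectl exec shared-volume-demo -c reader -- cat /data/shareddata.txt
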
SSS
------------------------------
[sig-network] Services 
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:59:17.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:59:17.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5800" for this suite.
Feb 11 13:59:23.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:59:23.911: INFO: namespace services-5800 deletion completed in 6.233562998s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.303 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
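Note: this test only asserts that the built-in `kubernetes` service in the default namespace exists and exposes the secure (HTTPS/443) API port. The same property can be checked directly:

kubectl get service kubernetes -n default
# Expect a ClusterIP service with 443/TCP in PORT(S).
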
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:59:23.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 13:59:23.986: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c42909f8-05de-4456-98e1-166157476eb5" in namespace "downward-api-8031" to be "success or failure"
Feb 11 13:59:24.018: INFO: Pod "downwardapi-volume-c42909f8-05de-4456-98e1-166157476eb5": Phase="Pending", Reason="", readiness=false. Elapsed: 31.799221ms
Feb 11 13:59:26.034: INFO: Pod "downwardapi-volume-c42909f8-05de-4456-98e1-166157476eb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047230711s
Feb 11 13:59:28.047: INFO: Pod "downwardapi-volume-c42909f8-05de-4456-98e1-166157476eb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060257502s
Feb 11 13:59:30.065: INFO: Pod "downwardapi-volume-c42909f8-05de-4456-98e1-166157476eb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078299685s
Feb 11 13:59:32.075: INFO: Pod "downwardapi-volume-c42909f8-05de-4456-98e1-166157476eb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088601223s
STEP: Saw pod success
Feb 11 13:59:32.076: INFO: Pod "downwardapi-volume-c42909f8-05de-4456-98e1-166157476eb5" satisfied condition "success or failure"
Feb 11 13:59:32.080: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c42909f8-05de-4456-98e1-166157476eb5 container client-container: 
STEP: delete the pod
Feb 11 13:59:32.144: INFO: Waiting for pod downwardapi-volume-c42909f8-05de-4456-98e1-166157476eb5 to disappear
Feb 11 13:59:32.212: INFO: Pod downwardapi-volume-c42909f8-05de-4456-98e1-166157476eb5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:59:32.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8031" for this suite.
Feb 11 13:59:38.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:59:38.426: INFO: namespace downward-api-8031 deletion completed in 6.200190543s

• [SLOW TEST:14.514 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
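Note: a hand-rolled equivalent of the pod this test creates is a downwardAPI volume with an explicit defaultMode, verified from inside the container. A sketch; the pod name is illustrative and 0400 is just an example mode:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["stat", "-c", "%a", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400             # octal; applied to every file in the volume
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downward-mode-demo     # prints 400 once the container has run
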
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:59:38.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0211 13:59:41.834353       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 11 13:59:41.834: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:59:41.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7089" for this suite.
Feb 11 13:59:48.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 13:59:48.109: INFO: namespace gc-7089 deletion completed in 6.262164462s

• [SLOW TEST:9.680 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
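Note: "not orphaning" means a cascading delete: the ReplicaSet and pods carry ownerReferences to the Deployment, so the garbage collector removes them once the owner is gone. Roughly, with a v1.15-era kubectl (names illustrative):

kubectl create deployment gc-demo --image=nginx:1.14-alpine
kubectl get rs -l app=gc-demo                      # the ReplicaSet owned by the Deployment
kubectl delete deployment gc-demo --cascade=true   # cascading delete (the default)
kubectl get rs,pods -l app=gc-demo                 # drains to empty as dependents are collected
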
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 13:59:48.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-cc64e5cc-47b0-4db3-9bd3-a07cb5805cf2
STEP: Creating a pod to test consume configMaps
Feb 11 13:59:48.302: INFO: Waiting up to 5m0s for pod "pod-configmaps-c4016b1e-956f-49dc-89ee-6b266baac272" in namespace "configmap-6497" to be "success or failure"
Feb 11 13:59:48.338: INFO: Pod "pod-configmaps-c4016b1e-956f-49dc-89ee-6b266baac272": Phase="Pending", Reason="", readiness=false. Elapsed: 35.300043ms
Feb 11 13:59:50.350: INFO: Pod "pod-configmaps-c4016b1e-956f-49dc-89ee-6b266baac272": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047501471s
Feb 11 13:59:52.410: INFO: Pod "pod-configmaps-c4016b1e-956f-49dc-89ee-6b266baac272": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108048524s
Feb 11 13:59:54.436: INFO: Pod "pod-configmaps-c4016b1e-956f-49dc-89ee-6b266baac272": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133894993s
Feb 11 13:59:56.445: INFO: Pod "pod-configmaps-c4016b1e-956f-49dc-89ee-6b266baac272": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.142573672s
STEP: Saw pod success
Feb 11 13:59:56.445: INFO: Pod "pod-configmaps-c4016b1e-956f-49dc-89ee-6b266baac272" satisfied condition "success or failure"
Feb 11 13:59:56.449: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c4016b1e-956f-49dc-89ee-6b266baac272 container configmap-volume-test: 
STEP: delete the pod
Feb 11 13:59:56.543: INFO: Waiting for pod pod-configmaps-c4016b1e-956f-49dc-89ee-6b266baac272 to disappear
Feb 11 13:59:56.555: INFO: Pod pod-configmaps-c4016b1e-956f-49dc-89ee-6b266baac272 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 13:59:56.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6497" for this suite.
Feb 11 14:00:02.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:00:02.741: INFO: namespace configmap-6497 deletion completed in 6.175912066s

• [SLOW TEST:14.632 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
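Note: in sketch form, the pod under test mounts a configMap volume and runs as a non-root UID; configMap files default to mode 0644, so a non-root user can read them. Names and the UID are illustrative:

kubectl create configmap cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot-demo             # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # non-root
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-demo
EOF
kubectl logs cm-nonroot-demo        # prints: value-1
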
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:00:02.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb 11 14:00:02.868: INFO: Waiting up to 5m0s for pod "client-containers-1bb064c7-f419-47f6-ba60-f93154b1ac8f" in namespace "containers-9990" to be "success or failure"
Feb 11 14:00:02.880: INFO: Pod "client-containers-1bb064c7-f419-47f6-ba60-f93154b1ac8f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.82964ms
Feb 11 14:00:04.893: INFO: Pod "client-containers-1bb064c7-f419-47f6-ba60-f93154b1ac8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024089628s
Feb 11 14:00:06.905: INFO: Pod "client-containers-1bb064c7-f419-47f6-ba60-f93154b1ac8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035975226s
Feb 11 14:00:08.914: INFO: Pod "client-containers-1bb064c7-f419-47f6-ba60-f93154b1ac8f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045042607s
Feb 11 14:00:10.929: INFO: Pod "client-containers-1bb064c7-f419-47f6-ba60-f93154b1ac8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059881044s
STEP: Saw pod success
Feb 11 14:00:10.929: INFO: Pod "client-containers-1bb064c7-f419-47f6-ba60-f93154b1ac8f" satisfied condition "success or failure"
Feb 11 14:00:10.934: INFO: Trying to get logs from node iruya-node pod client-containers-1bb064c7-f419-47f6-ba60-f93154b1ac8f container test-container: 
STEP: delete the pod
Feb 11 14:00:11.009: INFO: Waiting for pod client-containers-1bb064c7-f419-47f6-ba60-f93154b1ac8f to disappear
Feb 11 14:00:11.021: INFO: Pod client-containers-1bb064c7-f419-47f6-ba60-f93154b1ac8f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:00:11.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9990" for this suite.
Feb 11 14:00:17.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:00:17.242: INFO: namespace containers-9990 deletion completed in 6.195193797s

• [SLOW TEST:14.500 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
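Note: "override the image's default arguments (docker cmd)" maps to the container spec's args field: args replaces the image's CMD, while any ENTRYPOINT is kept. A minimal sketch (pod name illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    args: ["echo", "overridden", "arguments"]   # replaces busybox's default CMD ("sh")
EOF
kubectl logs args-override-demo     # prints: overridden arguments
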
SSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:00:17.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:00:25.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-980" for this suite.
Feb 11 14:01:07.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:01:07.680: INFO: namespace kubelet-test-980 deletion completed in 42.228272411s

• [SLOW TEST:50.438 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
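Note: "should not write to root filesystem" corresponds to securityContext.readOnlyRootFilesystem: true, under which any write to the container's root filesystem fails. Sketch (pod name illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-root-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo test > /file"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
kubectl logs readonly-root-demo     # sh: can't create /file: Read-only file system
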
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:01:07.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:01:07.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6490" for this suite.
Feb 11 14:01:13.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:01:14.049: INFO: namespace kubelet-test-6490 deletion completed in 6.149344933s

• [SLOW TEST:6.369 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
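Note: the point here is only that a pod whose command always fails remains deletable like any other. By hand (pod name illustrative):

kubectl run failing-demo --image=busybox --restart=Never -- /bin/false
kubectl get pod failing-demo        # ends up Failed (it would restart under other restart policies)
kubectl delete pod failing-demo     # deletion works regardless of container state
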
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:01:14.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:01:20.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7839" for this suite.
Feb 11 14:01:26.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:01:26.674: INFO: namespace namespaces-7839 deletion completed in 6.171241267s
STEP: Destroying namespace "nsdeletetest-1191" for this suite.
Feb 11 14:01:26.677: INFO: Namespace nsdeletetest-1191 was already deleted
STEP: Destroying namespace "nsdeletetest-5343" for this suite.
Feb 11 14:01:32.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:01:32.809: INFO: namespace nsdeletetest-5343 deletion completed in 6.131591383s

• [SLOW TEST:18.758 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
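Note: services are namespaced objects, so deleting their namespace deletes them, and a recreated namespace of the same name starts out empty. Replayed by hand (names illustrative):

kubectl create namespace nsdelete-demo
kubectl create service clusterip test-svc --tcp=80:80 -n nsdelete-demo
kubectl delete namespace nsdelete-demo    # waits until the namespace is fully removed
kubectl create namespace nsdelete-demo
kubectl get services -n nsdelete-demo     # No resources found
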
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:01:32.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-f12d73ba-ed26-48cc-9e2b-1d8f895bf5d0
STEP: Creating a pod to test consume configMaps
Feb 11 14:01:33.102: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-08a997c9-a366-4493-9b0b-795968ae8ce1" in namespace "projected-5672" to be "success or failure"
Feb 11 14:01:33.126: INFO: Pod "pod-projected-configmaps-08a997c9-a366-4493-9b0b-795968ae8ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 24.285991ms
Feb 11 14:01:35.135: INFO: Pod "pod-projected-configmaps-08a997c9-a366-4493-9b0b-795968ae8ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032761758s
Feb 11 14:01:37.144: INFO: Pod "pod-projected-configmaps-08a997c9-a366-4493-9b0b-795968ae8ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042445798s
Feb 11 14:01:39.201: INFO: Pod "pod-projected-configmaps-08a997c9-a366-4493-9b0b-795968ae8ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098682159s
Feb 11 14:01:41.212: INFO: Pod "pod-projected-configmaps-08a997c9-a366-4493-9b0b-795968ae8ce1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.109650208s
STEP: Saw pod success
Feb 11 14:01:41.212: INFO: Pod "pod-projected-configmaps-08a997c9-a366-4493-9b0b-795968ae8ce1" satisfied condition "success or failure"
Feb 11 14:01:41.215: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-08a997c9-a366-4493-9b0b-795968ae8ce1 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 11 14:01:41.300: INFO: Waiting for pod pod-projected-configmaps-08a997c9-a366-4493-9b0b-795968ae8ce1 to disappear
Feb 11 14:01:41.306: INFO: Pod pod-projected-configmaps-08a997c9-a366-4493-9b0b-795968ae8ce1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:01:41.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5672" for this suite.
Feb 11 14:01:47.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:01:47.557: INFO: namespace projected-5672 deletion completed in 6.245316072s

• [SLOW TEST:14.748 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
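Note: a projected volume bundles one or more sources (configMap, secret, downwardAPI, serviceAccountToken) under a single mount point, and defaultMode applies to the projected files. The configMap case in sketch form (names illustrative, 0400 an example mode):

kubectl create configmap proj-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["stat", "-c", "%a", "/etc/projected/data-1"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      defaultMode: 0400             # octal; applied to the projected files
      sources:
      - configMap:
          name: proj-demo
EOF
kubectl logs projected-cm-demo      # prints 400
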
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:01:47.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 11 14:01:47.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5346'
Feb 11 14:01:47.849: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 11 14:01:47.849: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb 11 14:01:47.879: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 11 14:01:47.910: INFO: scanned /root for discovery docs: 
Feb 11 14:01:47.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5346'
Feb 11 14:02:10.893: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 11 14:02:10.893: INFO: stdout: "Created e2e-test-nginx-rc-cd797ea1477f57faee3a233a8ee1acfc\nScaling up e2e-test-nginx-rc-cd797ea1477f57faee3a233a8ee1acfc from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-cd797ea1477f57faee3a233a8ee1acfc up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-cd797ea1477f57faee3a233a8ee1acfc to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Feb 11 14:02:10.893: INFO: stdout: "Created e2e-test-nginx-rc-cd797ea1477f57faee3a233a8ee1acfc\nScaling up e2e-test-nginx-rc-cd797ea1477f57faee3a233a8ee1acfc from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-cd797ea1477f57faee3a233a8ee1acfc up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-cd797ea1477f57faee3a233a8ee1acfc to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 11 14:02:10.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5346'
Feb 11 14:02:11.059: INFO: stderr: ""
Feb 11 14:02:11.059: INFO: stdout: "e2e-test-nginx-rc-cd797ea1477f57faee3a233a8ee1acfc-6tbnq "
Feb 11 14:02:11.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-cd797ea1477f57faee3a233a8ee1acfc-6tbnq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5346'
Feb 11 14:02:11.189: INFO: stderr: ""
Feb 11 14:02:11.189: INFO: stdout: "true"
Feb 11 14:02:11.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-cd797ea1477f57faee3a233a8ee1acfc-6tbnq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5346'
Feb 11 14:02:11.281: INFO: stderr: ""
Feb 11 14:02:11.281: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 11 14:02:11.281: INFO: e2e-test-nginx-rc-cd797ea1477f57faee3a233a8ee1acfc-6tbnq is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb 11 14:02:11.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5346'
Feb 11 14:02:11.378: INFO: stderr: ""
Feb 11 14:02:11.378: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:02:11.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5346" for this suite.
Feb 11 14:02:33.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:02:33.677: INFO: namespace kubectl-5346 deletion completed in 22.252613885s

• [SLOW TEST:46.119 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
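Note: as the stderr above says, `kubectl rolling-update` is deprecated and only works with ReplicationControllers. The Deployment-based equivalent of "rolling-update to the same image" on a v1.15-era kubectl (name illustrative):

kubectl create deployment web --image=docker.io/library/nginx:1.14-alpine
kubectl rollout restart deployment/web    # rolls replacement pods with the same image
kubectl rollout status deployment/web     # waits for the rollout to finish
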
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:02:33.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 11 14:02:33.818: INFO: Waiting up to 5m0s for pod "pod-ae417a98-8365-4717-a1f4-c05f257e40d0" in namespace "emptydir-9506" to be "success or failure"
Feb 11 14:02:33.851: INFO: Pod "pod-ae417a98-8365-4717-a1f4-c05f257e40d0": Phase="Pending", Reason="", readiness=false. Elapsed: 33.523551ms
Feb 11 14:02:35.871: INFO: Pod "pod-ae417a98-8365-4717-a1f4-c05f257e40d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052890754s
Feb 11 14:02:37.881: INFO: Pod "pod-ae417a98-8365-4717-a1f4-c05f257e40d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063101177s
Feb 11 14:02:39.898: INFO: Pod "pod-ae417a98-8365-4717-a1f4-c05f257e40d0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079846553s
Feb 11 14:02:41.921: INFO: Pod "pod-ae417a98-8365-4717-a1f4-c05f257e40d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.103391754s
STEP: Saw pod success
Feb 11 14:02:41.922: INFO: Pod "pod-ae417a98-8365-4717-a1f4-c05f257e40d0" satisfied condition "success or failure"
Feb 11 14:02:41.933: INFO: Trying to get logs from node iruya-node pod pod-ae417a98-8365-4717-a1f4-c05f257e40d0 container test-container: 
STEP: delete the pod
Feb 11 14:02:42.078: INFO: Waiting for pod pod-ae417a98-8365-4717-a1f4-c05f257e40d0 to disappear
Feb 11 14:02:42.087: INFO: Pod pod-ae417a98-8365-4717-a1f4-c05f257e40d0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:02:42.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9506" for this suite.
Feb 11 14:02:48.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:02:48.283: INFO: namespace emptydir-9506 deletion completed in 6.18975753s

• [SLOW TEST:14.604 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
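Note: the (non-root,0666,default) tuple in the test name encodes the user the pod runs as, the file mode written and verified, and the emptyDir medium ("default" = node-local storage, as opposed to medium: Memory for tmpfs). A rough by-hand version; names, UID, and fsGroup are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo          # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # "non-root"
    fsGroup: 1000                   # makes the volume group-writable for that UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /data/f && chmod 0666 /data/f && stat -c '%a' /data/f"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}                    # "default" medium
EOF
kubectl logs emptydir-mode-demo     # prints 666
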
SSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:02:48.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-4658, will wait for the garbage collector to delete the pods
Feb 11 14:02:58.586: INFO: Deleting Job.batch foo took: 12.534124ms
Feb 11 14:02:58.887: INFO: Terminating Job.batch foo pods took: 300.935404ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:03:46.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4658" for this suite.
Feb 11 14:03:52.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:03:53.012: INFO: namespace job-4658 deletion completed in 6.184723569s

• [SLOW TEST:64.728 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
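Note: deleting a Job cascades to its pods through the garbage collector, which is what the "Ensuring job was deleted" step above waits on. By hand (job name illustrative):

kubectl create job sleep-demo --image=busybox -- sleep 300
kubectl get pods -l job-name=sleep-demo   # pods created by the Job controller
kubectl delete job sleep-demo             # the garbage collector then removes the pods
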
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:03:53.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9576
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 11 14:03:53.092: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 11 14:04:27.370: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-9576 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 14:04:27.370: INFO: >>> kubeConfig: /root/.kube/config
I0211 14:04:27.454044       9 log.go:172] (0xc000aa8790) (0xc0022d1860) Create stream
I0211 14:04:27.454129       9 log.go:172] (0xc000aa8790) (0xc0022d1860) Stream added, broadcasting: 1
I0211 14:04:27.464658       9 log.go:172] (0xc000aa8790) Reply frame received for 1
I0211 14:04:27.464851       9 log.go:172] (0xc000aa8790) (0xc000fc40a0) Create stream
I0211 14:04:27.464890       9 log.go:172] (0xc000aa8790) (0xc000fc40a0) Stream added, broadcasting: 3
I0211 14:04:27.467710       9 log.go:172] (0xc000aa8790) Reply frame received for 3
I0211 14:04:27.467743       9 log.go:172] (0xc000aa8790) (0xc0022d1900) Create stream
I0211 14:04:27.467756       9 log.go:172] (0xc000aa8790) (0xc0022d1900) Stream added, broadcasting: 5
I0211 14:04:27.469941       9 log.go:172] (0xc000aa8790) Reply frame received for 5
I0211 14:04:27.709715       9 log.go:172] (0xc000aa8790) Data frame received for 3
I0211 14:04:27.709870       9 log.go:172] (0xc000fc40a0) (3) Data frame handling
I0211 14:04:27.709922       9 log.go:172] (0xc000fc40a0) (3) Data frame sent
I0211 14:04:27.891882       9 log.go:172] (0xc000aa8790) (0xc0022d1900) Stream removed, broadcasting: 5
I0211 14:04:27.892095       9 log.go:172] (0xc000aa8790) Data frame received for 1
I0211 14:04:27.892128       9 log.go:172] (0xc000aa8790) (0xc000fc40a0) Stream removed, broadcasting: 3
I0211 14:04:27.892196       9 log.go:172] (0xc0022d1860) (1) Data frame handling
I0211 14:04:27.892218       9 log.go:172] (0xc0022d1860) (1) Data frame sent
I0211 14:04:27.892234       9 log.go:172] (0xc000aa8790) (0xc0022d1860) Stream removed, broadcasting: 1
I0211 14:04:27.892249       9 log.go:172] (0xc000aa8790) Go away received
I0211 14:04:27.892927       9 log.go:172] (0xc000aa8790) (0xc0022d1860) Stream removed, broadcasting: 1
I0211 14:04:27.893074       9 log.go:172] (0xc000aa8790) (0xc000fc40a0) Stream removed, broadcasting: 3
I0211 14:04:27.893088       9 log.go:172] (0xc000aa8790) (0xc0022d1900) Stream removed, broadcasting: 5
Feb 11 14:04:27.893: INFO: Waiting for endpoints: map[]
Feb 11 14:04:27.900: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-9576 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 14:04:27.900: INFO: >>> kubeConfig: /root/.kube/config
I0211 14:04:27.949583       9 log.go:172] (0xc000bd26e0) (0xc000fc4c80) Create stream
I0211 14:04:27.949655       9 log.go:172] (0xc000bd26e0) (0xc000fc4c80) Stream added, broadcasting: 1
I0211 14:04:27.956306       9 log.go:172] (0xc000bd26e0) Reply frame received for 1
I0211 14:04:27.956426       9 log.go:172] (0xc000bd26e0) (0xc000fc5180) Create stream
I0211 14:04:27.956437       9 log.go:172] (0xc000bd26e0) (0xc000fc5180) Stream added, broadcasting: 3
I0211 14:04:27.957808       9 log.go:172] (0xc000bd26e0) Reply frame received for 3
I0211 14:04:27.957837       9 log.go:172] (0xc000bd26e0) (0xc0010c9cc0) Create stream
I0211 14:04:27.957851       9 log.go:172] (0xc000bd26e0) (0xc0010c9cc0) Stream added, broadcasting: 5
I0211 14:04:27.959705       9 log.go:172] (0xc000bd26e0) Reply frame received for 5
I0211 14:04:28.104149       9 log.go:172] (0xc000bd26e0) Data frame received for 3
I0211 14:04:28.104245       9 log.go:172] (0xc000fc5180) (3) Data frame handling
I0211 14:04:28.104291       9 log.go:172] (0xc000fc5180) (3) Data frame sent
I0211 14:04:28.255718       9 log.go:172] (0xc000bd26e0) Data frame received for 1
I0211 14:04:28.255902       9 log.go:172] (0xc000fc4c80) (1) Data frame handling
I0211 14:04:28.255951       9 log.go:172] (0xc000fc4c80) (1) Data frame sent
I0211 14:04:28.256165       9 log.go:172] (0xc000bd26e0) (0xc0010c9cc0) Stream removed, broadcasting: 5
I0211 14:04:28.256496       9 log.go:172] (0xc000bd26e0) (0xc000fc5180) Stream removed, broadcasting: 3
I0211 14:04:28.256620       9 log.go:172] (0xc000bd26e0) (0xc000fc4c80) Stream removed, broadcasting: 1
I0211 14:04:28.256651       9 log.go:172] (0xc000bd26e0) Go away received
I0211 14:04:28.256863       9 log.go:172] (0xc000bd26e0) (0xc000fc4c80) Stream removed, broadcasting: 1
I0211 14:04:28.256900       9 log.go:172] (0xc000bd26e0) (0xc000fc5180) Stream removed, broadcasting: 3
I0211 14:04:28.256911       9 log.go:172] (0xc000bd26e0) (0xc0010c9cc0) Stream removed, broadcasting: 5
Feb 11 14:04:28.257: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:04:28.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9576" for this suite.
Feb 11 14:04:52.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:04:52.405: INFO: namespace pod-network-test-9576 deletion completed in 24.137864532s

• [SLOW TEST:59.393 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
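Note: the /dial?...&protocol=udp requests above are the test's host container asking a helper web server to probe each pod's UDP echo endpoint. Outside the framework, raw pod-to-pod UDP reachability can be probed with busybox netcat; pod names are illustrative and <server-pod-ip> is a placeholder:

kubectl run udp-server --image=busybox --restart=Never -- nc -u -l -p 8081
kubectl get pod udp-server -o wide        # note the pod IP
kubectl run udp-client --image=busybox --restart=Never --rm -it -- \
  sh -c 'echo hello | nc -u -w1 <server-pod-ip> 8081'
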
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:04:52.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9386
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-9386
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-9386
Feb 11 14:04:52.629: INFO: Found 0 stateful pods, waiting for 1
Feb 11 14:05:02.644: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 11 14:05:02.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9386 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 11 14:05:05.283: INFO: stderr: "I0211 14:05:04.845436    1125 log.go:172] (0xc000be0420) (0xc000bda780) Create stream\nI0211 14:05:04.845706    1125 log.go:172] (0xc000be0420) (0xc000bda780) Stream added, broadcasting: 1\nI0211 14:05:04.867694    1125 log.go:172] (0xc000be0420) Reply frame received for 1\nI0211 14:05:04.867858    1125 log.go:172] (0xc000be0420) (0xc0003afa40) Create stream\nI0211 14:05:04.867882    1125 log.go:172] (0xc000be0420) (0xc0003afa40) Stream added, broadcasting: 3\nI0211 14:05:04.869928    1125 log.go:172] (0xc000be0420) Reply frame received for 3\nI0211 14:05:04.869999    1125 log.go:172] (0xc000be0420) (0xc000bda000) Create stream\nI0211 14:05:04.870019    1125 log.go:172] (0xc000be0420) (0xc000bda000) Stream added, broadcasting: 5\nI0211 14:05:04.872240    1125 log.go:172] (0xc000be0420) Reply frame received for 5\nI0211 14:05:05.076002    1125 log.go:172] (0xc000be0420) Data frame received for 5\nI0211 14:05:05.076105    1125 log.go:172] (0xc000bda000) (5) Data frame handling\nI0211 14:05:05.076160    1125 log.go:172] (0xc000bda000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0211 14:05:05.137533    1125 log.go:172] (0xc000be0420) Data frame received for 3\nI0211 14:05:05.137662    1125 log.go:172] (0xc0003afa40) (3) Data frame handling\nI0211 14:05:05.137730    1125 log.go:172] (0xc0003afa40) (3) Data frame sent\nI0211 14:05:05.269622    1125 log.go:172] (0xc000be0420) Data frame received for 1\nI0211 14:05:05.269740    1125 log.go:172] (0xc000be0420) (0xc0003afa40) Stream removed, broadcasting: 3\nI0211 14:05:05.269814    1125 log.go:172] (0xc000bda780) (1) Data frame handling\nI0211 14:05:05.269847    1125 log.go:172] (0xc000bda780) (1) Data frame sent\nI0211 14:05:05.269986    1125 log.go:172] (0xc000be0420) (0xc000bda000) Stream removed, broadcasting: 5\nI0211 14:05:05.270030    1125 log.go:172] (0xc000be0420) (0xc000bda780) Stream removed, broadcasting: 1\nI0211 14:05:05.270063    1125 log.go:172] (0xc000be0420) Go away received\nI0211 14:05:05.271456    1125 log.go:172] (0xc000be0420) (0xc000bda780) Stream removed, broadcasting: 1\nI0211 14:05:05.271477    1125 log.go:172] (0xc000be0420) (0xc0003afa40) Stream removed, broadcasting: 3\nI0211 14:05:05.271497    1125 log.go:172] (0xc000be0420) (0xc000bda000) Stream removed, broadcasting: 5\n"
Feb 11 14:05:05.283: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 11 14:05:05.283: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 11 14:05:05.291: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 11 14:05:15.300: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 11 14:05:15.300: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 14:05:15.323: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999996951s
Feb 11 14:05:16.338: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992311265s
Feb 11 14:05:17.353: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.977364176s
Feb 11 14:05:18.369: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.961862338s
Feb 11 14:05:19.381: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.945501983s
Feb 11 14:05:20.389: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.934164347s
Feb 11 14:05:21.397: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.925683888s
Feb 11 14:05:22.406: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.917670723s
Feb 11 14:05:23.415: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.908722834s
Feb 11 14:05:24.424: INFO: Verifying statefulset ss doesn't scale past 1 for another 899.75986ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-9386
Feb 11 14:05:25.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:05:26.231: INFO: stderr: "I0211 14:05:25.803516    1156 log.go:172] (0xc000938370) (0xc0008f26e0) Create stream\nI0211 14:05:25.804046    1156 log.go:172] (0xc000938370) (0xc0008f26e0) Stream added, broadcasting: 1\nI0211 14:05:25.830198    1156 log.go:172] (0xc000938370) Reply frame received for 1\nI0211 14:05:25.830644    1156 log.go:172] (0xc000938370) (0xc0005f6280) Create stream\nI0211 14:05:25.830743    1156 log.go:172] (0xc000938370) (0xc0005f6280) Stream added, broadcasting: 3\nI0211 14:05:25.834980    1156 log.go:172] (0xc000938370) Reply frame received for 3\nI0211 14:05:25.835063    1156 log.go:172] (0xc000938370) (0xc0008f2780) Create stream\nI0211 14:05:25.835133    1156 log.go:172] (0xc000938370) (0xc0008f2780) Stream added, broadcasting: 5\nI0211 14:05:25.838217    1156 log.go:172] (0xc000938370) Reply frame received for 5\nI0211 14:05:26.000505    1156 log.go:172] (0xc000938370) Data frame received for 5\nI0211 14:05:26.000749    1156 log.go:172] (0xc0008f2780) (5) Data frame handling\nI0211 14:05:26.000804    1156 log.go:172] (0xc0008f2780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0211 14:05:26.003456    1156 log.go:172] (0xc000938370) Data frame received for 3\nI0211 14:05:26.003573    1156 log.go:172] (0xc0005f6280) (3) Data frame handling\nI0211 14:05:26.003635    1156 log.go:172] (0xc0005f6280) (3) Data frame sent\nI0211 14:05:26.207883    1156 log.go:172] (0xc000938370) Data frame received for 1\nI0211 14:05:26.208209    1156 log.go:172] (0xc000938370) (0xc0005f6280) Stream removed, broadcasting: 3\nI0211 14:05:26.208470    1156 log.go:172] (0xc0008f26e0) (1) Data frame handling\nI0211 14:05:26.208659    1156 log.go:172] (0xc0008f26e0) (1) Data frame sent\nI0211 14:05:26.208728    1156 log.go:172] (0xc000938370) (0xc0008f2780) Stream removed, broadcasting: 5\nI0211 14:05:26.208824    1156 log.go:172] (0xc000938370) (0xc0008f26e0) Stream removed, broadcasting: 1\nI0211 14:05:26.208860    1156 log.go:172] (0xc000938370) Go away received\nI0211 14:05:26.210185    1156 log.go:172] (0xc000938370) (0xc0008f26e0) Stream removed, broadcasting: 1\nI0211 14:05:26.210234    1156 log.go:172] (0xc000938370) (0xc0005f6280) Stream removed, broadcasting: 3\nI0211 14:05:26.210254    1156 log.go:172] (0xc000938370) (0xc0008f2780) Stream removed, broadcasting: 5\n"
Feb 11 14:05:26.232: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 11 14:05:26.232: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 11 14:05:26.248: INFO: Found 2 stateful pods, waiting for 3
Feb 11 14:05:36.266: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 14:05:36.266: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 14:05:36.266: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 11 14:05:46.259: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 14:05:46.259: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 14:05:46.259: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 11 14:05:46.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9386 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 11 14:05:47.128: INFO: stderr: "I0211 14:05:46.633535    1177 log.go:172] (0xc000a32370) (0xc00032e6e0) Create stream\nI0211 14:05:46.634069    1177 log.go:172] (0xc000a32370) (0xc00032e6e0) Stream added, broadcasting: 1\nI0211 14:05:46.643478    1177 log.go:172] (0xc000a32370) Reply frame received for 1\nI0211 14:05:46.644183    1177 log.go:172] (0xc000a32370) (0xc000a78000) Create stream\nI0211 14:05:46.644362    1177 log.go:172] (0xc000a32370) (0xc000a78000) Stream added, broadcasting: 3\nI0211 14:05:46.652389    1177 log.go:172] (0xc000a32370) Reply frame received for 3\nI0211 14:05:46.652480    1177 log.go:172] (0xc000a32370) (0xc000a780a0) Create stream\nI0211 14:05:46.652504    1177 log.go:172] (0xc000a32370) (0xc000a780a0) Stream added, broadcasting: 5\nI0211 14:05:46.653943    1177 log.go:172] (0xc000a32370) Reply frame received for 5\nI0211 14:05:46.909421    1177 log.go:172] (0xc000a32370) Data frame received for 5\nI0211 14:05:46.909658    1177 log.go:172] (0xc000a780a0) (5) Data frame handling\nI0211 14:05:46.909739    1177 log.go:172] (0xc000a780a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0211 14:05:46.910112    1177 log.go:172] (0xc000a32370) Data frame received for 3\nI0211 14:05:46.910212    1177 log.go:172] (0xc000a78000) (3) Data frame handling\nI0211 14:05:46.910277    1177 log.go:172] (0xc000a78000) (3) Data frame sent\nI0211 14:05:47.108557    1177 log.go:172] (0xc000a32370) Data frame received for 1\nI0211 14:05:47.108691    1177 log.go:172] (0xc000a32370) (0xc000a78000) Stream removed, broadcasting: 3\nI0211 14:05:47.108747    1177 log.go:172] (0xc00032e6e0) (1) Data frame handling\nI0211 14:05:47.108782    1177 log.go:172] (0xc00032e6e0) (1) Data frame sent\nI0211 14:05:47.108792    1177 log.go:172] (0xc000a32370) (0xc000a780a0) Stream removed, broadcasting: 5\nI0211 14:05:47.109030    1177 log.go:172] (0xc000a32370) (0xc00032e6e0) Stream removed, broadcasting: 1\nI0211 14:05:47.109068    1177 log.go:172] (0xc000a32370) Go away received\nI0211 14:05:47.110135    1177 log.go:172] (0xc000a32370) (0xc00032e6e0) Stream removed, broadcasting: 1\nI0211 14:05:47.110156    1177 log.go:172] (0xc000a32370) (0xc000a78000) Stream removed, broadcasting: 3\nI0211 14:05:47.110173    1177 log.go:172] (0xc000a32370) (0xc000a780a0) Stream removed, broadcasting: 5\n"
Feb 11 14:05:47.129: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 11 14:05:47.129: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 11 14:05:47.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9386 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 11 14:05:47.613: INFO: stderr: "I0211 14:05:47.301987    1197 log.go:172] (0xc000116d10) (0xc000682780) Create stream\nI0211 14:05:47.302259    1197 log.go:172] (0xc000116d10) (0xc000682780) Stream added, broadcasting: 1\nI0211 14:05:47.305064    1197 log.go:172] (0xc000116d10) Reply frame received for 1\nI0211 14:05:47.305107    1197 log.go:172] (0xc000116d10) (0xc000682820) Create stream\nI0211 14:05:47.305115    1197 log.go:172] (0xc000116d10) (0xc000682820) Stream added, broadcasting: 3\nI0211 14:05:47.305972    1197 log.go:172] (0xc000116d10) Reply frame received for 3\nI0211 14:05:47.305992    1197 log.go:172] (0xc000116d10) (0xc0007ba000) Create stream\nI0211 14:05:47.306002    1197 log.go:172] (0xc000116d10) (0xc0007ba000) Stream added, broadcasting: 5\nI0211 14:05:47.306970    1197 log.go:172] (0xc000116d10) Reply frame received for 5\nI0211 14:05:47.431064    1197 log.go:172] (0xc000116d10) Data frame received for 5\nI0211 14:05:47.431112    1197 log.go:172] (0xc0007ba000) (5) Data frame handling\nI0211 14:05:47.431133    1197 log.go:172] (0xc0007ba000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0211 14:05:47.511505    1197 log.go:172] (0xc000116d10) Data frame received for 3\nI0211 14:05:47.511709    1197 log.go:172] (0xc000682820) (3) Data frame handling\nI0211 14:05:47.511744    1197 log.go:172] (0xc000682820) (3) Data frame sent\nI0211 14:05:47.600697    1197 log.go:172] (0xc000116d10) Data frame received for 1\nI0211 14:05:47.600748    1197 log.go:172] (0xc000682780) (1) Data frame handling\nI0211 14:05:47.600766    1197 log.go:172] (0xc000682780) (1) Data frame sent\nI0211 14:05:47.601276    1197 log.go:172] (0xc000116d10) (0xc000682780) Stream removed, broadcasting: 1\nI0211 14:05:47.601466    1197 log.go:172] (0xc000116d10) (0xc000682820) Stream removed, broadcasting: 3\nI0211 14:05:47.601773    1197 log.go:172] (0xc000116d10) (0xc0007ba000) Stream removed, broadcasting: 5\nI0211 14:05:47.602215    1197 log.go:172] (0xc000116d10) Go away received\nI0211 14:05:47.603369    1197 log.go:172] (0xc000116d10) (0xc000682780) Stream removed, broadcasting: 1\nI0211 14:05:47.603398    1197 log.go:172] (0xc000116d10) (0xc000682820) Stream removed, broadcasting: 3\nI0211 14:05:47.603405    1197 log.go:172] (0xc000116d10) (0xc0007ba000) Stream removed, broadcasting: 5\n"
Feb 11 14:05:47.613: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 11 14:05:47.613: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 11 14:05:47.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9386 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 11 14:05:48.470: INFO: stderr: "I0211 14:05:47.936471    1214 log.go:172] (0xc000138fd0) (0xc000600b40) Create stream\nI0211 14:05:47.936737    1214 log.go:172] (0xc000138fd0) (0xc000600b40) Stream added, broadcasting: 1\nI0211 14:05:47.947244    1214 log.go:172] (0xc000138fd0) Reply frame received for 1\nI0211 14:05:47.947474    1214 log.go:172] (0xc000138fd0) (0xc000600be0) Create stream\nI0211 14:05:47.947500    1214 log.go:172] (0xc000138fd0) (0xc000600be0) Stream added, broadcasting: 3\nI0211 14:05:47.950000    1214 log.go:172] (0xc000138fd0) Reply frame received for 3\nI0211 14:05:47.950131    1214 log.go:172] (0xc000138fd0) (0xc0003a4140) Create stream\nI0211 14:05:47.950151    1214 log.go:172] (0xc000138fd0) (0xc0003a4140) Stream added, broadcasting: 5\nI0211 14:05:47.951844    1214 log.go:172] (0xc000138fd0) Reply frame received for 5\nI0211 14:05:48.138395    1214 log.go:172] (0xc000138fd0) Data frame received for 5\nI0211 14:05:48.138687    1214 log.go:172] (0xc0003a4140) (5) Data frame handling\nI0211 14:05:48.138792    1214 log.go:172] (0xc0003a4140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0211 14:05:48.182662    1214 log.go:172] (0xc000138fd0) Data frame received for 3\nI0211 14:05:48.182844    1214 log.go:172] (0xc000600be0) (3) Data frame handling\nI0211 14:05:48.182922    1214 log.go:172] (0xc000600be0) (3) Data frame sent\nI0211 14:05:48.449525    1214 log.go:172] (0xc000138fd0) Data frame received for 1\nI0211 14:05:48.450088    1214 log.go:172] (0xc000600b40) (1) Data frame handling\nI0211 14:05:48.450140    1214 log.go:172] (0xc000600b40) (1) Data frame sent\nI0211 14:05:48.451036    1214 log.go:172] (0xc000138fd0) (0xc000600b40) Stream removed, broadcasting: 1\nI0211 14:05:48.451915    1214 log.go:172] (0xc000138fd0) (0xc0003a4140) Stream removed, broadcasting: 5\nI0211 14:05:48.452492    1214 log.go:172] (0xc000138fd0) (0xc000600be0) Stream removed, broadcasting: 3\nI0211 14:05:48.452568    1214 log.go:172] (0xc000138fd0) Go away received\nI0211 14:05:48.452974    1214 log.go:172] (0xc000138fd0) (0xc000600b40) Stream removed, broadcasting: 1\nI0211 14:05:48.453049    1214 log.go:172] (0xc000138fd0) (0xc000600be0) Stream removed, broadcasting: 3\nI0211 14:05:48.453078    1214 log.go:172] (0xc000138fd0) (0xc0003a4140) Stream removed, broadcasting: 5\n"
Feb 11 14:05:48.471: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 11 14:05:48.471: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 11 14:05:48.471: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 14:05:48.484: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 11 14:05:48.485: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 11 14:05:48.485: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 11 14:05:48.514: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999749s
Feb 11 14:05:49.523: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.980667044s
Feb 11 14:05:50.544: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.971581186s
Feb 11 14:05:51.556: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.950787554s
Feb 11 14:05:52.579: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.937800375s
Feb 11 14:05:53.595: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.914682676s
Feb 11 14:05:54.622: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.89927684s
Feb 11 14:05:55.636: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.872169661s
Feb 11 14:05:56.647: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.858187048s
Feb 11 14:05:57.657: INFO: Verifying statefulset ss doesn't scale past 3 for another 847.713656ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-9386
Feb 11 14:05:58.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:05:59.304: INFO: stderr: "I0211 14:05:58.877174    1235 log.go:172] (0xc000584420) (0xc00033e6e0) Create stream\nI0211 14:05:58.877548    1235 log.go:172] (0xc000584420) (0xc00033e6e0) Stream added, broadcasting: 1\nI0211 14:05:58.895271    1235 log.go:172] (0xc000584420) Reply frame received for 1\nI0211 14:05:58.895460    1235 log.go:172] (0xc000584420) (0xc0005fc320) Create stream\nI0211 14:05:58.895488    1235 log.go:172] (0xc000584420) (0xc0005fc320) Stream added, broadcasting: 3\nI0211 14:05:58.903939    1235 log.go:172] (0xc000584420) Reply frame received for 3\nI0211 14:05:58.904267    1235 log.go:172] (0xc000584420) (0xc00097c000) Create stream\nI0211 14:05:58.906707    1235 log.go:172] (0xc000584420) (0xc00097c000) Stream added, broadcasting: 5\nI0211 14:05:58.920652    1235 log.go:172] (0xc000584420) Reply frame received for 5\nI0211 14:05:59.120369    1235 log.go:172] (0xc000584420) Data frame received for 3\nI0211 14:05:59.120477    1235 log.go:172] (0xc0005fc320) (3) Data frame handling\nI0211 14:05:59.120532    1235 log.go:172] (0xc0005fc320) (3) Data frame sent\nI0211 14:05:59.132936    1235 log.go:172] (0xc000584420) Data frame received for 5\nI0211 14:05:59.133055    1235 log.go:172] (0xc00097c000) (5) Data frame handling\nI0211 14:05:59.133086    1235 log.go:172] (0xc00097c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0211 14:05:59.290483    1235 log.go:172] (0xc000584420) Data frame received for 1\nI0211 14:05:59.290660    1235 log.go:172] (0xc00033e6e0) (1) Data frame handling\nI0211 14:05:59.290714    1235 log.go:172] (0xc00033e6e0) (1) Data frame sent\nI0211 14:05:59.290780    1235 log.go:172] (0xc000584420) (0xc00033e6e0) Stream removed, broadcasting: 1\nI0211 14:05:59.290933    1235 log.go:172] (0xc000584420) (0xc0005fc320) Stream removed, broadcasting: 3\nI0211 14:05:59.291349    1235 log.go:172] (0xc000584420) (0xc00097c000) Stream removed, broadcasting: 5\nI0211 14:05:59.291476    1235 log.go:172] (0xc000584420) Go away received\nI0211 14:05:59.292598    1235 log.go:172] (0xc000584420) (0xc00033e6e0) Stream removed, broadcasting: 1\nI0211 14:05:59.292702    1235 log.go:172] (0xc000584420) (0xc0005fc320) Stream removed, broadcasting: 3\nI0211 14:05:59.292736    1235 log.go:172] (0xc000584420) (0xc00097c000) Stream removed, broadcasting: 5\n"
Feb 11 14:05:59.304: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 11 14:05:59.304: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 11 14:05:59.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9386 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:05:59.697: INFO: stderr: "I0211 14:05:59.461145    1255 log.go:172] (0xc0009dc420) (0xc000890780) Create stream\nI0211 14:05:59.461339    1255 log.go:172] (0xc0009dc420) (0xc000890780) Stream added, broadcasting: 1\nI0211 14:05:59.465783    1255 log.go:172] (0xc0009dc420) Reply frame received for 1\nI0211 14:05:59.465917    1255 log.go:172] (0xc0009dc420) (0xc0002cc3c0) Create stream\nI0211 14:05:59.465939    1255 log.go:172] (0xc0009dc420) (0xc0002cc3c0) Stream added, broadcasting: 3\nI0211 14:05:59.467328    1255 log.go:172] (0xc0009dc420) Reply frame received for 3\nI0211 14:05:59.467376    1255 log.go:172] (0xc0009dc420) (0xc000890820) Create stream\nI0211 14:05:59.467390    1255 log.go:172] (0xc0009dc420) (0xc000890820) Stream added, broadcasting: 5\nI0211 14:05:59.468959    1255 log.go:172] (0xc0009dc420) Reply frame received for 5\nI0211 14:05:59.559764    1255 log.go:172] (0xc0009dc420) Data frame received for 3\nI0211 14:05:59.559865    1255 log.go:172] (0xc0002cc3c0) (3) Data frame handling\nI0211 14:05:59.559911    1255 log.go:172] (0xc0002cc3c0) (3) Data frame sent\nI0211 14:05:59.561217    1255 log.go:172] (0xc0009dc420) Data frame received for 5\nI0211 14:05:59.561265    1255 log.go:172] (0xc000890820) (5) Data frame handling\nI0211 14:05:59.561285    1255 log.go:172] (0xc000890820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0211 14:05:59.679449    1255 log.go:172] (0xc0009dc420) (0xc0002cc3c0) Stream removed, broadcasting: 3\nI0211 14:05:59.679909    1255 log.go:172] (0xc0009dc420) Data frame received for 1\nI0211 14:05:59.679982    1255 log.go:172] (0xc0009dc420) (0xc000890820) Stream removed, broadcasting: 5\nI0211 14:05:59.680047    1255 log.go:172] (0xc000890780) (1) Data frame handling\nI0211 14:05:59.680099    1255 log.go:172] (0xc000890780) (1) Data frame sent\nI0211 14:05:59.680111    1255 log.go:172] (0xc0009dc420) (0xc000890780) Stream removed, broadcasting: 1\nI0211 14:05:59.680139    1255 log.go:172] (0xc0009dc420) Go away received\nI0211 14:05:59.681701    1255 log.go:172] (0xc0009dc420) (0xc000890780) Stream removed, broadcasting: 1\nI0211 14:05:59.681727    1255 log.go:172] (0xc0009dc420) (0xc0002cc3c0) Stream removed, broadcasting: 3\nI0211 14:05:59.681744    1255 log.go:172] (0xc0009dc420) (0xc000890820) Stream removed, broadcasting: 5\n"
Feb 11 14:05:59.697: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 11 14:05:59.698: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 11 14:05:59.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:06:00.440: INFO: stderr: "I0211 14:05:59.969966    1275 log.go:172] (0xc000a600b0) (0xc00061e320) Create stream\nI0211 14:05:59.970334    1275 log.go:172] (0xc000a600b0) (0xc00061e320) Stream added, broadcasting: 1\nI0211 14:05:59.977278    1275 log.go:172] (0xc000a600b0) Reply frame received for 1\nI0211 14:05:59.977387    1275 log.go:172] (0xc000a600b0) (0xc0006201e0) Create stream\nI0211 14:05:59.977413    1275 log.go:172] (0xc000a600b0) (0xc0006201e0) Stream added, broadcasting: 3\nI0211 14:05:59.978958    1275 log.go:172] (0xc000a600b0) Reply frame received for 3\nI0211 14:05:59.979011    1275 log.go:172] (0xc000a600b0) (0xc000620280) Create stream\nI0211 14:05:59.979021    1275 log.go:172] (0xc000a600b0) (0xc000620280) Stream added, broadcasting: 5\nI0211 14:05:59.980685    1275 log.go:172] (0xc000a600b0) Reply frame received for 5\nI0211 14:06:00.126382    1275 log.go:172] (0xc000a600b0) Data frame received for 5\nI0211 14:06:00.126762    1275 log.go:172] (0xc000620280) (5) Data frame handling\nI0211 14:06:00.126824    1275 log.go:172] (0xc000620280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0211 14:06:00.131690    1275 log.go:172] (0xc000a600b0) Data frame received for 3\nI0211 14:06:00.131762    1275 log.go:172] (0xc0006201e0) (3) Data frame handling\nI0211 14:06:00.131820    1275 log.go:172] (0xc0006201e0) (3) Data frame sent\nI0211 14:06:00.425779    1275 log.go:172] (0xc000a600b0) (0xc0006201e0) Stream removed, broadcasting: 3\nI0211 14:06:00.426190    1275 log.go:172] (0xc000a600b0) Data frame received for 1\nI0211 14:06:00.426416    1275 log.go:172] (0xc000a600b0) (0xc000620280) Stream removed, broadcasting: 5\nI0211 14:06:00.426469    1275 log.go:172] (0xc00061e320) (1) Data frame handling\nI0211 14:06:00.426515    1275 log.go:172] (0xc00061e320) (1) Data frame sent\nI0211 14:06:00.426543    1275 log.go:172] (0xc000a600b0) (0xc00061e320) Stream removed, broadcasting: 1\nI0211 14:06:00.426611    1275 log.go:172] (0xc000a600b0) Go away received\nI0211 14:06:00.427971    1275 log.go:172] (0xc000a600b0) (0xc00061e320) Stream removed, broadcasting: 1\nI0211 14:06:00.427984    1275 log.go:172] (0xc000a600b0) (0xc0006201e0) Stream removed, broadcasting: 3\nI0211 14:06:00.427988    1275 log.go:172] (0xc000a600b0) (0xc000620280) Stream removed, broadcasting: 5\n"
Feb 11 14:06:00.441: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 11 14:06:00.441: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 11 14:06:00.441: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 11 14:06:30.522: INFO: Deleting all statefulset in ns statefulset-9386
Feb 11 14:06:30.531: INFO: Scaling statefulset ss to 0
Feb 11 14:06:30.550: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 14:06:30.553: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:06:30.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9386" for this suite.
Feb 11 14:06:38.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:06:38.905: INFO: namespace statefulset-9386 deletion completed in 8.303534794s

• [SLOW TEST:106.499 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
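
The scale-down above is gated on readiness: moving index.html out of the nginx webroot makes each pod's readiness probe fail, the controller then holds the set at 3 replicas for the verification window, and the final scale to 0 removes pods in reverse ordinal order (ss-2 first, ss-0 last). A minimal sketch of driving the same sequence by hand, assuming the namespace, pod names, and nginx layout of this run:

# Break readiness by hiding the page the probe fetches
kubectl --kubeconfig=/root/.kube/config exec -n statefulset-9386 ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# Ask for zero replicas; deletion proceeds ss-2 -> ss-1 -> ss-0
kubectl --kubeconfig=/root/.kube/config scale statefulset ss -n statefulset-9386 --replicas=0
# Restore the page so a pod can become ready again
kubectl --kubeconfig=/root/.kube/config exec -n statefulset-9386 ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
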
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:06:38.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 11 14:06:39.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1266'
Feb 11 14:06:39.217: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 11 14:06:39.217: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 11 14:06:39.276: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-9mjbv]
Feb 11 14:06:39.276: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-9mjbv" in namespace "kubectl-1266" to be "running and ready"
Feb 11 14:06:39.283: INFO: Pod "e2e-test-nginx-rc-9mjbv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.555096ms
Feb 11 14:06:41.320: INFO: Pod "e2e-test-nginx-rc-9mjbv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043370924s
Feb 11 14:06:43.334: INFO: Pod "e2e-test-nginx-rc-9mjbv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057571467s
Feb 11 14:06:45.349: INFO: Pod "e2e-test-nginx-rc-9mjbv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072501865s
Feb 11 14:06:47.359: INFO: Pod "e2e-test-nginx-rc-9mjbv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08274357s
Feb 11 14:06:49.367: INFO: Pod "e2e-test-nginx-rc-9mjbv": Phase="Running", Reason="", readiness=true. Elapsed: 10.090448616s
Feb 11 14:06:49.367: INFO: Pod "e2e-test-nginx-rc-9mjbv" satisfied condition "running and ready"
Feb 11 14:06:49.367: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-9mjbv]
Feb 11 14:06:49.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-1266'
Feb 11 14:06:49.559: INFO: stderr: ""
Feb 11 14:06:49.559: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb 11 14:06:49.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1266'
Feb 11 14:06:49.714: INFO: stderr: ""
Feb 11 14:06:49.715: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:06:49.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1266" for this suite.
Feb 11 14:07:11.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:07:11.952: INFO: namespace kubectl-1266 deletion completed in 22.232192595s

• [SLOW TEST:33.047 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
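
The empty stdout at 14:06:49.559 is expected: nginx writes its access log to stdout and no request had hit the pod yet, so `kubectl logs` returned nothing. The deprecation warning on stderr is the interesting part: --generator=run/v1 is what makes `kubectl run` emit a ReplicationController rather than a bare pod or a Deployment. A sketch of the command sequence this test drives, with the log lookup going through the controller name (namespace as in this run):

kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 -n kubectl-1266
# rc/NAME resolves to one of the pods the controller owns
kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc -n kubectl-1266
kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc -n kubectl-1266
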
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:07:11.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-90511e5b-7168-42e6-8de6-8a486d76e782
STEP: Creating a pod to test consume secrets
Feb 11 14:07:12.072: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9ca81a70-f965-4f9d-adee-8d0121966968" in namespace "projected-6539" to be "success or failure"
Feb 11 14:07:12.897: INFO: Pod "pod-projected-secrets-9ca81a70-f965-4f9d-adee-8d0121966968": Phase="Pending", Reason="", readiness=false. Elapsed: 824.860694ms
Feb 11 14:07:14.907: INFO: Pod "pod-projected-secrets-9ca81a70-f965-4f9d-adee-8d0121966968": Phase="Pending", Reason="", readiness=false. Elapsed: 2.834145368s
Feb 11 14:07:16.912: INFO: Pod "pod-projected-secrets-9ca81a70-f965-4f9d-adee-8d0121966968": Phase="Pending", Reason="", readiness=false. Elapsed: 4.839599334s
Feb 11 14:07:18.928: INFO: Pod "pod-projected-secrets-9ca81a70-f965-4f9d-adee-8d0121966968": Phase="Pending", Reason="", readiness=false. Elapsed: 6.855118125s
Feb 11 14:07:20.941: INFO: Pod "pod-projected-secrets-9ca81a70-f965-4f9d-adee-8d0121966968": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.868091423s
STEP: Saw pod success
Feb 11 14:07:20.941: INFO: Pod "pod-projected-secrets-9ca81a70-f965-4f9d-adee-8d0121966968" satisfied condition "success or failure"
Feb 11 14:07:20.948: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-9ca81a70-f965-4f9d-adee-8d0121966968 container projected-secret-volume-test: 
STEP: delete the pod
Feb 11 14:07:21.097: INFO: Waiting for pod pod-projected-secrets-9ca81a70-f965-4f9d-adee-8d0121966968 to disappear
Feb 11 14:07:21.118: INFO: Pod pod-projected-secrets-9ca81a70-f965-4f9d-adee-8d0121966968 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:07:21.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6539" for this suite.
Feb 11 14:07:27.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:07:27.367: INFO: namespace projected-6539 deletion completed in 6.23651852s

• [SLOW TEST:15.414 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
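
This test mounts a single secret key at a remapped path with an explicit file mode through a projected volume. A minimal manifest sketch of that shape; the secret name is the one from this run, while the key, path, and mode are illustrative assumptions:

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo   # hypothetical name; the suite generates a UUID
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-90511e5b-7168-42e6-8de6-8a486d76e782
          items:
          - key: data-1              # illustrative key
            path: new-path-data-1    # remapped path inside the mount
            mode: 0400               # the "Item Mode" the test name refers to
EOF
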
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:07:27.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-756c66d3-6d3b-4aba-9c12-e031212c6ed1 in namespace container-probe-6443
Feb 11 14:07:35.601: INFO: Started pod test-webserver-756c66d3-6d3b-4aba-9c12-e031212c6ed1 in namespace container-probe-6443
STEP: checking the pod's current state and verifying that restartCount is present
Feb 11 14:07:35.607: INFO: Initial restart count of pod test-webserver-756c66d3-6d3b-4aba-9c12-e031212c6ed1 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:11:36.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6443" for this suite.
Feb 11 14:11:42.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:11:42.229: INFO: namespace container-probe-6443 deletion completed in 6.162330631s

• [SLOW TEST:254.861 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
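
This probe test is the inverse of the usual liveness check: the pod runs for roughly four minutes (14:07:35 to 14:11:36) while the suite polls restartCount and asserts it stays at 0, i.e. a probe that keeps succeeding must never trigger a restart. A minimal sketch of a pod in that shape; the suite's own image and probe path differ, this one assumes nginx answering 200 on / at port 80:

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo   # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1
      failureThreshold: 3
EOF
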
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:11:42.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 11 14:11:42.347: INFO: PodSpec: initContainers in spec.initContainers
Feb 11 14:12:49.707: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a063c6c7-e11e-442d-87b5-7c9113b94382", GenerateName:"", Namespace:"init-container-4457", SelfLink:"/api/v1/namespaces/init-container-4457/pods/pod-init-a063c6c7-e11e-442d-87b5-7c9113b94382", UID:"49e5a777-a499-4dee-80a7-f1601e34af96", ResourceVersion:"23955149", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717027102, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"347390243"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-spgc9", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002226500), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-spgc9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-spgc9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-spgc9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0019a4298), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002674300), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0019a4320)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0019a4340)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0019a4348), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0019a434c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717027102, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717027102, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717027102, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717027102, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc00189a360), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00179a150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00179a1c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://23ae5bd4f1187705c473552e8719641225cc80224533641ad596ec28e43c25c7"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00189a440), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00189a420), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:12:49.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4457" for this suite.
Feb 11 14:13:11.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:13:11.952: INFO: namespace init-container-4457 deletion completed in 22.159966138s

• [SLOW TEST:89.719 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
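
The Pod dump at 14:12:49.707 shows the mechanics: init1 runs /bin/false and has RestartCount:3, init2 (/bin/true) is still Waiting, and the app container run1 (pause:3.1) has never started, because init containers run sequentially and each must succeed before the next begins. With RestartPolicy "Always" the kubelet keeps retrying init1 with backoff instead of failing the pod outright. A manifest sketch reconstructed from that dump (the name is hypothetical; the suite generates a UUID, and requests default to the limits shown):

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo   # hypothetical name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always fails, so init2 and run1 never start
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      limits:
        cpu: 100m
        memory: "52428800"
EOF
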
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:13:11.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 14:13:12.088: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0978074-ba98-4be1-8cbc-cac25bf0673f" in namespace "downward-api-2466" to be "success or failure"
Feb 11 14:13:12.115: INFO: Pod "downwardapi-volume-f0978074-ba98-4be1-8cbc-cac25bf0673f": Phase="Pending", Reason="", readiness=false. Elapsed: 26.913073ms
Feb 11 14:13:14.127: INFO: Pod "downwardapi-volume-f0978074-ba98-4be1-8cbc-cac25bf0673f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03855917s
Feb 11 14:13:16.146: INFO: Pod "downwardapi-volume-f0978074-ba98-4be1-8cbc-cac25bf0673f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057529763s
Feb 11 14:13:18.159: INFO: Pod "downwardapi-volume-f0978074-ba98-4be1-8cbc-cac25bf0673f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071039078s
Feb 11 14:13:20.179: INFO: Pod "downwardapi-volume-f0978074-ba98-4be1-8cbc-cac25bf0673f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090930801s
STEP: Saw pod success
Feb 11 14:13:20.180: INFO: Pod "downwardapi-volume-f0978074-ba98-4be1-8cbc-cac25bf0673f" satisfied condition "success or failure"
Feb 11 14:13:20.185: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f0978074-ba98-4be1-8cbc-cac25bf0673f container client-container: 
STEP: delete the pod
Feb 11 14:13:20.246: INFO: Waiting for pod downwardapi-volume-f0978074-ba98-4be1-8cbc-cac25bf0673f to disappear
Feb 11 14:13:20.254: INFO: Pod downwardapi-volume-f0978074-ba98-4be1-8cbc-cac25bf0673f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:13:20.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2466" for this suite.
Feb 11 14:13:26.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:13:26.487: INFO: namespace downward-api-2466 deletion completed in 6.22343222s

• [SLOW TEST:14.535 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
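
The volume in this test exposes limits.cpu through the downward API while the container deliberately sets no CPU limit, so the projected file falls back to the node's allocatable CPU, which is exactly what the test name asserts. A minimal sketch of that wiring; all names here are illustrative:

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_limit"]
    # no resources.limits.cpu here, so the projected value is node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
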
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:13:26.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0211 14:14:09.151910       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 11 14:14:09.152: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:14:09.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8084" for this suite.
Feb 11 14:14:17.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:14:18.817: INFO: namespace gc-8084 deletion completed in 9.6619927s

• [SLOW TEST:52.330 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
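
"Orphan" here means the ReplicationController is deleted with deleteOptions.propagationPolicy=Orphan, so the garbage collector strips the ownerReferences from the pods instead of cascading the delete; the 30-second wait checks that no pod is removed by mistake. With the kubectl of this era the same delete looks like the sketch below (the rc name is hypothetical; the log does not record it):

# --cascade=false maps to propagationPolicy=Orphan (later kubectl spells this --cascade=orphan)
kubectl --kubeconfig=/root/.kube/config delete rc some-rc -n gc-8084 --cascade=false
# the pods survive the delete and lose their ownerReferences
kubectl --kubeconfig=/root/.kube/config get pods -n gc-8084
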
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:14:18.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-ba0bae54-59ef-4441-9bd9-5d21f6664cb0
STEP: Creating a pod to test consume configMaps
Feb 11 14:14:19.713: INFO: Waiting up to 5m0s for pod "pod-configmaps-50f400cd-080b-43d7-b850-96d53c6836d6" in namespace "configmap-4357" to be "success or failure"
Feb 11 14:14:19.940: INFO: Pod "pod-configmaps-50f400cd-080b-43d7-b850-96d53c6836d6": Phase="Pending", Reason="", readiness=false. Elapsed: 226.07197ms
Feb 11 14:14:22.557: INFO: Pod "pod-configmaps-50f400cd-080b-43d7-b850-96d53c6836d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.843079766s
Feb 11 14:14:24.714: INFO: Pod "pod-configmaps-50f400cd-080b-43d7-b850-96d53c6836d6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.000301266s
Feb 11 14:14:26.723: INFO: Pod "pod-configmaps-50f400cd-080b-43d7-b850-96d53c6836d6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.008679349s
Feb 11 14:14:28.738: INFO: Pod "pod-configmaps-50f400cd-080b-43d7-b850-96d53c6836d6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.024214802s
Feb 11 14:14:30.745: INFO: Pod "pod-configmaps-50f400cd-080b-43d7-b850-96d53c6836d6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.031444063s
Feb 11 14:14:32.754: INFO: Pod "pod-configmaps-50f400cd-080b-43d7-b850-96d53c6836d6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.040312458s
Feb 11 14:14:34.765: INFO: Pod "pod-configmaps-50f400cd-080b-43d7-b850-96d53c6836d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.051169917s
STEP: Saw pod success
Feb 11 14:14:34.765: INFO: Pod "pod-configmaps-50f400cd-080b-43d7-b850-96d53c6836d6" satisfied condition "success or failure"
Feb 11 14:14:34.770: INFO: Trying to get logs from node iruya-node pod pod-configmaps-50f400cd-080b-43d7-b850-96d53c6836d6 container configmap-volume-test: 
STEP: delete the pod
Feb 11 14:14:34.859: INFO: Waiting for pod pod-configmaps-50f400cd-080b-43d7-b850-96d53c6836d6 to disappear
Feb 11 14:14:34.868: INFO: Pod pod-configmaps-50f400cd-080b-43d7-b850-96d53c6836d6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:14:34.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4357" for this suite.
Feb 11 14:14:40.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:14:41.109: INFO: namespace configmap-4357 deletion completed in 6.225842404s

• [SLOW TEST:22.291 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
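
This is the ConfigMap twin of the projected-secret case above: one key remapped to a nested path with an explicit file mode. A minimal manifest sketch; the configMap name is taken from this run, while key, path, and mode are illustrative:

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-ba0bae54-59ef-4441-9bd9-5d21f6664cb0
      items:
      - key: data-2           # illustrative key
        path: path/to/data-2  # items may remap into subdirectories
        mode: 0400
EOF
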
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:14:41.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 11 14:14:41.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-910'
Feb 11 14:14:41.428: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 11 14:14:41.428: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb 11 14:14:41.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-910'
Feb 11 14:14:41.690: INFO: stderr: ""
Feb 11 14:14:41.690: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:14:41.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-910" for this suite.
Feb 11 14:14:47.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:14:47.954: INFO: namespace kubectl-910 deletion completed in 6.256181234s

• [SLOW TEST:6.843 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
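
As with the rc variant earlier, the generator is the interesting part: --generator=job/v1 turns `kubectl run` into a batch/v1 Job factory, and --restart=OnFailure is what selects a Job rather than a Deployment or a bare pod. A sketch of the deprecated form next to the replacement the warning points at; note that `kubectl create job` defaults the pod template's restartPolicy to Never, so it is not a drop-in equivalent of OnFailure:

kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine -n kubectl-910
# replacement form, per the deprecation notice
kubectl --kubeconfig=/root/.kube/config create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine -n kubectl-910
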
------------------------------
S
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:14:47.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 11 14:14:48.212: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8538,SelfLink:/api/v1/namespaces/watch-8538/configmaps/e2e-watch-test-label-changed,UID:b8f26c32-ce8e-4da3-b098-fd4e823c29c3,ResourceVersion:23955573,Generation:0,CreationTimestamp:2020-02-11 14:14:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 11 14:14:48.213: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8538,SelfLink:/api/v1/namespaces/watch-8538/configmaps/e2e-watch-test-label-changed,UID:b8f26c32-ce8e-4da3-b098-fd4e823c29c3,ResourceVersion:23955574,Generation:0,CreationTimestamp:2020-02-11 14:14:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 11 14:14:48.214: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8538,SelfLink:/api/v1/namespaces/watch-8538/configmaps/e2e-watch-test-label-changed,UID:b8f26c32-ce8e-4da3-b098-fd4e823c29c3,ResourceVersion:23955575,Generation:0,CreationTimestamp:2020-02-11 14:14:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 11 14:14:58.270: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8538,SelfLink:/api/v1/namespaces/watch-8538/configmaps/e2e-watch-test-label-changed,UID:b8f26c32-ce8e-4da3-b098-fd4e823c29c3,ResourceVersion:23955590,Generation:0,CreationTimestamp:2020-02-11 14:14:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 11 14:14:58.270: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8538,SelfLink:/api/v1/namespaces/watch-8538/configmaps/e2e-watch-test-label-changed,UID:b8f26c32-ce8e-4da3-b098-fd4e823c29c3,ResourceVersion:23955591,Generation:0,CreationTimestamp:2020-02-11 14:14:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 11 14:14:58.271: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8538,SelfLink:/api/v1/namespaces/watch-8538/configmaps/e2e-watch-test-label-changed,UID:b8f26c32-ce8e-4da3-b098-fd4e823c29c3,ResourceVersion:23955592,Generation:0,CreationTimestamp:2020-02-11 14:14:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:14:58.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8538" for this suite.
Feb 11 14:15:04.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:15:04.419: INFO: namespace watch-8538 deletion completed in 6.141762591s

• [SLOW TEST:16.465 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
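
The event triples above illustrate how label-selector watches behave: when the label stops matching the selector the server synthesizes a DELETED event even though the object still exists, and when the label is restored the watch sees ADDED again, carrying the modification made while the object was out of scope (the DELETED at 14:14:48 shows mutation: 1, the re-ADDED at 14:14:58 shows mutation: 2). A sketch of observing the same behavior by hand, assuming the names from this run:

kubectl --kubeconfig=/root/.kube/config get configmaps -n watch-8538 -l watch-this-configmap=label-changed-and-restored --watch
# in a second shell: flip the label off, then back on
kubectl --kubeconfig=/root/.kube/config label configmap e2e-watch-test-label-changed -n watch-8538 watch-this-configmap=no-longer-matching --overwrite
kubectl --kubeconfig=/root/.kube/config label configmap e2e-watch-test-label-changed -n watch-8538 watch-this-configmap=label-changed-and-restored --overwrite
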
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:15:04.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 11 14:15:04.488: INFO: Waiting up to 5m0s for pod "pod-411518ae-1988-4420-96f3-b255ee4a2b28" in namespace "emptydir-9917" to be "success or failure"
Feb 11 14:15:04.496: INFO: Pod "pod-411518ae-1988-4420-96f3-b255ee4a2b28": Phase="Pending", Reason="", readiness=false. Elapsed: 8.346795ms
Feb 11 14:15:06.519: INFO: Pod "pod-411518ae-1988-4420-96f3-b255ee4a2b28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031002011s
Feb 11 14:15:08.543: INFO: Pod "pod-411518ae-1988-4420-96f3-b255ee4a2b28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055420669s
Feb 11 14:15:10.559: INFO: Pod "pod-411518ae-1988-4420-96f3-b255ee4a2b28": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071279327s
Feb 11 14:15:12.588: INFO: Pod "pod-411518ae-1988-4420-96f3-b255ee4a2b28": Phase="Running", Reason="", readiness=true. Elapsed: 8.09963798s
Feb 11 14:15:14.622: INFO: Pod "pod-411518ae-1988-4420-96f3-b255ee4a2b28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.133693527s
STEP: Saw pod success
Feb 11 14:15:14.622: INFO: Pod "pod-411518ae-1988-4420-96f3-b255ee4a2b28" satisfied condition "success or failure"
Feb 11 14:15:14.637: INFO: Trying to get logs from node iruya-node pod pod-411518ae-1988-4420-96f3-b255ee4a2b28 container test-container: 
STEP: delete the pod
Feb 11 14:15:14.776: INFO: Waiting for pod pod-411518ae-1988-4420-96f3-b255ee4a2b28 to disappear
Feb 11 14:15:14.785: INFO: Pod pod-411518ae-1988-4420-96f3-b255ee4a2b28 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:15:14.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9917" for this suite.
Feb 11 14:15:20.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:15:21.011: INFO: namespace emptydir-9917 deletion completed in 6.219116659s

• [SLOW TEST:16.591 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
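A minimal sketch of the kind of pod this test submits: an emptyDir on the default (node disk) medium, mounted into a container running as a non-root UID; the image, UID, command, and paths are illustrative rather than the suite's exact values:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirPod() *corev1.Pod {
    nonRoot := int64(1001) // hypothetical non-root UID
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                // Write a file with mode 0644, then print the mode back so
                // an assertion can be made against the container log.
                Command: []string{"sh", "-c",
                    "umask 0022 && echo hi > /mnt/test/f && stat -c %a /mnt/test/f"},
                SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRoot},
                VolumeMounts:    []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/test"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "scratch",
                // An empty EmptyDirVolumeSource selects the default medium.
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
        },
    }
}

------------------------------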
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:15:21.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb 11 14:15:21.154: INFO: Waiting up to 5m0s for pod "client-containers-38f27364-463c-4b85-8b7d-2d0099b079de" in namespace "containers-6494" to be "success or failure"
Feb 11 14:15:21.159: INFO: Pod "client-containers-38f27364-463c-4b85-8b7d-2d0099b079de": Phase="Pending", Reason="", readiness=false. Elapsed: 5.484399ms
Feb 11 14:15:23.172: INFO: Pod "client-containers-38f27364-463c-4b85-8b7d-2d0099b079de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018250388s
Feb 11 14:15:25.186: INFO: Pod "client-containers-38f27364-463c-4b85-8b7d-2d0099b079de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032045719s
Feb 11 14:15:27.196: INFO: Pod "client-containers-38f27364-463c-4b85-8b7d-2d0099b079de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042348943s
Feb 11 14:15:29.206: INFO: Pod "client-containers-38f27364-463c-4b85-8b7d-2d0099b079de": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052052694s
Feb 11 14:15:31.216: INFO: Pod "client-containers-38f27364-463c-4b85-8b7d-2d0099b079de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062173581s
STEP: Saw pod success
Feb 11 14:15:31.216: INFO: Pod "client-containers-38f27364-463c-4b85-8b7d-2d0099b079de" satisfied condition "success or failure"
Feb 11 14:15:31.222: INFO: Trying to get logs from node iruya-node pod client-containers-38f27364-463c-4b85-8b7d-2d0099b079de container test-container: 
STEP: delete the pod
Feb 11 14:15:31.277: INFO: Waiting for pod client-containers-38f27364-463c-4b85-8b7d-2d0099b079de to disappear
Feb 11 14:15:31.280: INFO: Pod client-containers-38f27364-463c-4b85-8b7d-2d0099b079de no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:15:31.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6494" for this suite.
Feb 11 14:15:37.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:15:37.557: INFO: namespace containers-6494 deletion completed in 6.270134901s

• [SLOW TEST:16.545 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
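The "override all" pod above succeeds because a container spec can replace both halves of the image's startup line: Command overrides the image's ENTRYPOINT and Args overrides its CMD. A short sketch with illustrative values:

package sketch

import corev1 "k8s.io/api/core/v1"

// overrideAll returns a container that ignores whatever ENTRYPOINT/CMD the
// image declares and runs the given command line instead.
func overrideAll() corev1.Container {
    return corev1.Container{
        Name:    "test-container",
        Image:   "busybox",
        Command: []string{"/bin/sh"},                 // replaces ENTRYPOINT
        Args:    []string{"-c", "echo override all"}, // replaces CMD
    }
}

------------------------------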
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:15:37.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-673f50f5-960d-4d01-9f52-9340d00072e3
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:15:37.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3654" for this suite.
Feb 11 14:15:43.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:15:43.931: INFO: namespace secrets-3654 deletion completed in 6.199191927s

• [SLOW TEST:6.374 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
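This test passes because the apiserver rejects the object at validation time; a sketch of the negative case, assuming a v1.15-era Create signature:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createEmptyKeySecret builds a Secret whose Data map uses "" as a key,
// which API validation refuses, so the returned error is expected to be
// non-nil on a conformant cluster.
func createEmptyKeySecret(cs kubernetes.Interface, ns string) error {
    s := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
        Data:       map[string][]byte{"": []byte("value-1")},
    }
    _, err := cs.CoreV1().Secrets(ns).Create(s)
    return err
}

------------------------------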
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:15:43.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 11 14:15:44.083: INFO: Waiting up to 5m0s for pod "pod-b43e9c62-6386-4d17-b7a0-876b14acf1d4" in namespace "emptydir-2157" to be "success or failure"
Feb 11 14:15:44.099: INFO: Pod "pod-b43e9c62-6386-4d17-b7a0-876b14acf1d4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.679993ms
Feb 11 14:15:46.155: INFO: Pod "pod-b43e9c62-6386-4d17-b7a0-876b14acf1d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072610479s
Feb 11 14:15:48.165: INFO: Pod "pod-b43e9c62-6386-4d17-b7a0-876b14acf1d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082413381s
Feb 11 14:15:50.173: INFO: Pod "pod-b43e9c62-6386-4d17-b7a0-876b14acf1d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090337457s
Feb 11 14:15:52.185: INFO: Pod "pod-b43e9c62-6386-4d17-b7a0-876b14acf1d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102532167s
STEP: Saw pod success
Feb 11 14:15:52.186: INFO: Pod "pod-b43e9c62-6386-4d17-b7a0-876b14acf1d4" satisfied condition "success or failure"
Feb 11 14:15:52.190: INFO: Trying to get logs from node iruya-node pod pod-b43e9c62-6386-4d17-b7a0-876b14acf1d4 container test-container: 
STEP: delete the pod
Feb 11 14:15:52.304: INFO: Waiting for pod pod-b43e9c62-6386-4d17-b7a0-876b14acf1d4 to disappear
Feb 11 14:15:52.355: INFO: Pod pod-b43e9c62-6386-4d17-b7a0-876b14acf1d4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:15:52.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2157" for this suite.
Feb 11 14:15:58.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:15:58.631: INFO: namespace emptydir-2157 deletion completed in 6.266439427s

• [SLOW TEST:14.699 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
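Same pod shape as the earlier emptyDir sketch; the field this variant exercises is the medium, which backs the volume with tmpfs instead of node disk:

package sketch

import corev1 "k8s.io/api/core/v1"

// tmpfsVolume selects the Memory medium, so files land on tmpfs and the
// 0777 mode check above runs against a RAM-backed mount.
func tmpfsVolume() corev1.Volume {
    return corev1.Volume{
        Name: "scratch",
        VolumeSource: corev1.VolumeSource{
            EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
        },
    }
}

------------------------------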
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:15:58.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4641.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4641.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4641.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4641.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4641.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4641.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 11 14:16:13.008: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4641/dns-test-8734371e-7b3f-47d8-8e47-61b29f162a9f: the server could not find the requested resource (get pods dns-test-8734371e-7b3f-47d8-8e47-61b29f162a9f)
Feb 11 14:16:13.019: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4641/dns-test-8734371e-7b3f-47d8-8e47-61b29f162a9f: the server could not find the requested resource (get pods dns-test-8734371e-7b3f-47d8-8e47-61b29f162a9f)
Feb 11 14:16:13.046: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-4641.svc.cluster.local from pod dns-4641/dns-test-8734371e-7b3f-47d8-8e47-61b29f162a9f: the server could not find the requested resource (get pods dns-test-8734371e-7b3f-47d8-8e47-61b29f162a9f)
Feb 11 14:16:13.075: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-4641/dns-test-8734371e-7b3f-47d8-8e47-61b29f162a9f: the server could not find the requested resource (get pods dns-test-8734371e-7b3f-47d8-8e47-61b29f162a9f)
Feb 11 14:16:13.089: INFO: Unable to read jessie_udp@PodARecord from pod dns-4641/dns-test-8734371e-7b3f-47d8-8e47-61b29f162a9f: the server could not find the requested resource (get pods dns-test-8734371e-7b3f-47d8-8e47-61b29f162a9f)
Feb 11 14:16:13.093: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4641/dns-test-8734371e-7b3f-47d8-8e47-61b29f162a9f: the server could not find the requested resource (get pods dns-test-8734371e-7b3f-47d8-8e47-61b29f162a9f)
Feb 11 14:16:13.093: INFO: Lookups using dns-4641/dns-test-8734371e-7b3f-47d8-8e47-61b29f162a9f failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-4641.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 11 14:16:18.166: INFO: DNS probes using dns-4641/dns-test-8734371e-7b3f-47d8-8e47-61b29f162a9f succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:16:18.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4641" for this suite.
Feb 11 14:16:24.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:16:24.613: INFO: namespace dns-4641 deletion completed in 6.266768497s

• [SLOW TEST:25.981 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
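The awk fragment in the probe commands above builds the pod's A record name from its IP; the same construction in Go, for reference:

package sketch

import (
    "fmt"
    "strings"
)

// podARecord turns a pod IP into its cluster DNS A record: dots become
// dashes and the name is qualified by <namespace>.pod.cluster.local.
func podARecord(podIP, namespace string) string {
    return fmt.Sprintf("%s.%s.pod.cluster.local",
        strings.Replace(podIP, ".", "-", -1), namespace)
}

// e.g. podARecord("10.44.0.1", "dns-4641") yields
// "10-44-0-1.dns-4641.pod.cluster.local", the record the dig probes query.

------------------------------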
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:16:24.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 14:16:24.740: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:16:25.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8407" for this suite.
Feb 11 14:16:31.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:16:32.229: INFO: namespace custom-resource-definition-8407 deletion completed in 6.330684663s

• [SLOW TEST:7.615 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
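A minimal sketch of the round trip this test performs, against the apiextensions.k8s.io/v1beta1 API that a v1.15 server serves; the group and kind names here are illustrative, not the suite's fixtures:

package sketch

import (
    apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
    apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// createAndDeleteCRD registers a throwaway CRD and immediately removes it,
// which is all the conformance test asserts.
func createAndDeleteCRD(cs apiextclient.Interface) error {
    crd := &apiextv1beta1.CustomResourceDefinition{
        ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
        Spec: apiextv1beta1.CustomResourceDefinitionSpec{
            Group:   "example.com",
            Version: "v1",
            Scope:   apiextv1beta1.NamespaceScoped,
            Names: apiextv1beta1.CustomResourceDefinitionNames{
                Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
            },
        },
    }
    if _, err := cs.ApiextensionsV1beta1().CustomResourceDefinitions().Create(crd); err != nil {
        return err
    }
    return cs.ApiextensionsV1beta1().CustomResourceDefinitions().Delete(crd.Name, nil)
}

------------------------------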
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:16:32.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb 11 14:16:40.384: INFO: Pod pod-hostip-11c6f423-163a-4367-a7d8-3246f2ad4e7f has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:16:40.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3933" for this suite.
Feb 11 14:17:02.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:17:02.550: INFO: namespace pods-3933 deletion completed in 22.160880002s

• [SLOW TEST:30.320 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
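The hostIP logged above is filled in by the kubelet once the pod is bound to a node; reading it back is a plain status lookup (v1.15-era Get signature):

package sketch

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// hostIPOf fetches a pod and returns status.hostIP, which stays empty
// until the pod has been scheduled onto a node.
func hostIPOf(cs kubernetes.Interface, ns, name string) (string, error) {
    pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
    if err != nil {
        return "", err
    }
    if pod.Status.HostIP == "" {
        return "", fmt.Errorf("pod %s/%s has no hostIP yet", ns, name)
    }
    return pod.Status.HostIP, nil
}

------------------------------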
S
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:17:02.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 14:17:03.467: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a30bf222-0d71-4177-bde2-d565f1a44569" in namespace "projected-2779" to be "success or failure"
Feb 11 14:17:03.493: INFO: Pod "downwardapi-volume-a30bf222-0d71-4177-bde2-d565f1a44569": Phase="Pending", Reason="", readiness=false. Elapsed: 24.520181ms
Feb 11 14:17:05.503: INFO: Pod "downwardapi-volume-a30bf222-0d71-4177-bde2-d565f1a44569": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034552221s
Feb 11 14:17:07.510: INFO: Pod "downwardapi-volume-a30bf222-0d71-4177-bde2-d565f1a44569": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042300887s
Feb 11 14:17:09.520: INFO: Pod "downwardapi-volume-a30bf222-0d71-4177-bde2-d565f1a44569": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051894702s
Feb 11 14:17:11.526: INFO: Pod "downwardapi-volume-a30bf222-0d71-4177-bde2-d565f1a44569": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058079647s
Feb 11 14:17:13.563: INFO: Pod "downwardapi-volume-a30bf222-0d71-4177-bde2-d565f1a44569": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095187119s
STEP: Saw pod success
Feb 11 14:17:13.564: INFO: Pod "downwardapi-volume-a30bf222-0d71-4177-bde2-d565f1a44569" satisfied condition "success or failure"
Feb 11 14:17:13.569: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a30bf222-0d71-4177-bde2-d565f1a44569 container client-container: 
STEP: delete the pod
Feb 11 14:17:13.631: INFO: Waiting for pod downwardapi-volume-a30bf222-0d71-4177-bde2-d565f1a44569 to disappear
Feb 11 14:17:13.738: INFO: Pod downwardapi-volume-a30bf222-0d71-4177-bde2-d565f1a44569 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:17:13.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2779" for this suite.
Feb 11 14:17:20.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:17:20.555: INFO: namespace projected-2779 deletion completed in 6.809369603s

• [SLOW TEST:18.004 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
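A sketch of a projected downward API volume with DefaultMode set; the mode value and file path are illustrative, but the shape matches what this kind of test mounts:

package sketch

import corev1 "k8s.io/api/core/v1"

// defaultModeVolume projects pod metadata as files; DefaultMode fixes the
// permission bits on every projected file that does not override them,
// which is the property the test asserts on.
func defaultModeVolume() corev1.Volume {
    mode := int32(0400) // hypothetical mode; the assertion works for any value
    return corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                DefaultMode: &mode,
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "podname",
                            FieldRef: &corev1.ObjectFieldSelector{
                                APIVersion: "v1", FieldPath: "metadata.name",
                            },
                        }},
                    },
                }},
            },
        },
    }
}

------------------------------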
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:17:20.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-2f5fed73-4f7d-4669-98a4-085bf5e85872 in namespace container-probe-9797
Feb 11 14:17:28.875: INFO: Started pod busybox-2f5fed73-4f7d-4669-98a4-085bf5e85872 in namespace container-probe-9797
STEP: checking the pod's current state and verifying that restartCount is present
Feb 11 14:17:28.882: INFO: Initial restart count of pod busybox-2f5fed73-4f7d-4669-98a4-085bf5e85872 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:21:29.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9797" for this suite.
Feb 11 14:21:35.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:21:35.800: INFO: namespace container-probe-9797 deletion completed in 6.190098909s

• [SLOW TEST:255.243 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
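A sketch of the probe shape under test: the container creates /tmp/health at startup and never removes it, so the exec probe keeps succeeding and restartCount stays 0 for the whole observation window (v1.15-era corev1.Probe, where Handler is an embedded field; timings are illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

// healthyLivenessContainer pairs a long sleep with an exec probe that
// always finds its file, so the kubelet never restarts the container.
func healthyLivenessContainer() corev1.Container {
    return corev1.Container{
        Name:    "busybox",
        Image:   "busybox",
        Command: []string{"sh", "-c", "touch /tmp/health; sleep 600"},
        LivenessProbe: &corev1.Probe{
            Handler: corev1.Handler{
                Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
            },
            InitialDelaySeconds: 15,
            PeriodSeconds:       5,
        },
    }
}

------------------------------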
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:21:35.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 11 14:21:44.033: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:21:44.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6429" for this suite.
Feb 11 14:21:50.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:21:50.337: INFO: namespace container-runtime-6429 deletion completed in 6.254308523s

• [SLOW TEST:14.536 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
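A sketch of the container shape this test exercises: it exits non-zero without writing to terminationMessagePath, and FallbackToLogsOnError tells the kubelet to take the termination message from the tail of the container log instead, which is why "DONE" shows up in the status above:

package sketch

import corev1 "k8s.io/api/core/v1"

// fallbackContainer fails on purpose; with FallbackToLogsOnError the last
// log line ("DONE") becomes the container's termination message.
func fallbackContainer() corev1.Container {
    return corev1.Container{
        Name:                     "termination-message-container",
        Image:                    "busybox",
        Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
        TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
    }
}

------------------------------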
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:21:50.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 14:21:50.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 11 14:21:50.640: INFO: stderr: ""
Feb 11 14:21:50.641: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:21:50.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8098" for this suite.
Feb 11 14:21:56.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:21:56.788: INFO: namespace kubectl-8098 deletion completed in 6.139878532s

• [SLOW TEST:6.451 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
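The "Server Version" half of that stdout comes from the cluster's /version endpoint; the discovery client exposes the same data programmatically:

package sketch

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
)

// printServerVersion queries /version, the same source kubectl version
// reads for its "Server Version" line.
func printServerVersion(cs kubernetes.Interface) error {
    v, err := cs.Discovery().ServerVersion()
    if err != nil {
        return err
    }
    fmt.Printf("Server Version: %s (git %s)\n", v.GitVersion, v.GitCommit)
    return nil
}

------------------------------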
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:21:56.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 14:21:56.926: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 11 14:21:56.934: INFO: Number of nodes with available pods: 0
Feb 11 14:21:56.934: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 11 14:21:56.990: INFO: Number of nodes with available pods: 0
Feb 11 14:21:56.990: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:21:57.999: INFO: Number of nodes with available pods: 0
Feb 11 14:21:57.999: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:21:59.003: INFO: Number of nodes with available pods: 0
Feb 11 14:21:59.003: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:00.009: INFO: Number of nodes with available pods: 0
Feb 11 14:22:00.009: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:00.999: INFO: Number of nodes with available pods: 0
Feb 11 14:22:00.999: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:02.001: INFO: Number of nodes with available pods: 0
Feb 11 14:22:02.002: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:03.004: INFO: Number of nodes with available pods: 0
Feb 11 14:22:03.004: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:04.001: INFO: Number of nodes with available pods: 0
Feb 11 14:22:04.001: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:04.998: INFO: Number of nodes with available pods: 1
Feb 11 14:22:04.998: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 11 14:22:05.041: INFO: Number of nodes with available pods: 1
Feb 11 14:22:05.041: INFO: Number of running nodes: 0, number of available pods: 1
Feb 11 14:22:06.048: INFO: Number of nodes with available pods: 0
Feb 11 14:22:06.048: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 11 14:22:06.073: INFO: Number of nodes with available pods: 0
Feb 11 14:22:06.073: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:07.091: INFO: Number of nodes with available pods: 0
Feb 11 14:22:07.091: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:08.083: INFO: Number of nodes with available pods: 0
Feb 11 14:22:08.083: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:09.098: INFO: Number of nodes with available pods: 0
Feb 11 14:22:09.099: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:10.083: INFO: Number of nodes with available pods: 0
Feb 11 14:22:10.084: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:11.091: INFO: Number of nodes with available pods: 0
Feb 11 14:22:11.091: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:12.123: INFO: Number of nodes with available pods: 0
Feb 11 14:22:12.123: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:13.091: INFO: Number of nodes with available pods: 0
Feb 11 14:22:13.091: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:14.083: INFO: Number of nodes with available pods: 0
Feb 11 14:22:14.083: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:15.083: INFO: Number of nodes with available pods: 0
Feb 11 14:22:15.083: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:16.084: INFO: Number of nodes with available pods: 0
Feb 11 14:22:16.084: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:17.141: INFO: Number of nodes with available pods: 0
Feb 11 14:22:17.142: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:18.083: INFO: Number of nodes with available pods: 0
Feb 11 14:22:18.083: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:19.082: INFO: Number of nodes with available pods: 0
Feb 11 14:22:19.082: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:20.084: INFO: Number of nodes with available pods: 0
Feb 11 14:22:20.084: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:21.078: INFO: Number of nodes with available pods: 0
Feb 11 14:22:21.078: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:22.082: INFO: Number of nodes with available pods: 0
Feb 11 14:22:22.082: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:23.086: INFO: Number of nodes with available pods: 0
Feb 11 14:22:23.086: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:24.088: INFO: Number of nodes with available pods: 0
Feb 11 14:22:24.088: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:22:25.083: INFO: Number of nodes with available pods: 1
Feb 11 14:22:25.083: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6791, will wait for the garbage collector to delete the pods
Feb 11 14:22:25.182: INFO: Deleting DaemonSet.extensions daemon-set took: 33.185504ms
Feb 11 14:22:25.483: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.280483ms
Feb 11 14:22:36.594: INFO: Number of nodes with available pods: 0
Feb 11 14:22:36.594: INFO: Number of running nodes: 0, number of available pods: 0
Feb 11 14:22:36.601: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6791/daemonsets","resourceVersion":"23956496"},"items":null}

Feb 11 14:22:36.605: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6791/pods","resourceVersion":"23956496"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:22:36.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6791" for this suite.
Feb 11 14:22:42.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:22:43.121: INFO: namespace daemonsets-6791 deletion completed in 6.147868877s

• [SLOW TEST:46.332 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
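A sketch of the "complex daemon" driving the log above: the pod template's node selector confines the daemon to matching nodes (none at first, hence "Number of running nodes: 0"); relabeling a node schedules the pod, relabeling it again evicts it, and the test finally patches the selector and switches the strategy to RollingUpdate. The label keys and image here are illustrative:

package sketch

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func complexDaemonSet() *appsv1.DaemonSet {
    podLabels := map[string]string{"daemonset-name": "daemon-set"}
    return &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: podLabels},
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                Type: appsv1.RollingUpdateDaemonSetStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: podLabels},
                Spec: corev1.PodSpec{
                    // Pods only run on nodes labeled color=blue; flipping the
                    // node label produces the schedule/evict cycle logged above.
                    NodeSelector: map[string]string{"color": "blue"},
                    Containers:   []corev1.Container{{Name: "app", Image: "nginx"}},
                },
            },
        },
    }
}

------------------------------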
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:22:43.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-831
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-831 to expose endpoints map[]
Feb 11 14:22:43.300: INFO: successfully validated that service multi-endpoint-test in namespace services-831 exposes endpoints map[] (11.125768ms elapsed)
STEP: Creating pod pod1 in namespace services-831
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-831 to expose endpoints map[pod1:[100]]
Feb 11 14:22:47.495: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.165169653s elapsed, will retry)
Feb 11 14:22:51.555: INFO: successfully validated that service multi-endpoint-test in namespace services-831 exposes endpoints map[pod1:[100]] (8.225114624s elapsed)
STEP: Creating pod pod2 in namespace services-831
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-831 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 11 14:22:56.080: INFO: Unexpected endpoints: found map[0fdea888-1176-469a-830d-c834300c27ab:[100]], expected map[pod1:[100] pod2:[101]] (4.51116465s elapsed, will retry)
Feb 11 14:22:59.206: INFO: successfully validated that service multi-endpoint-test in namespace services-831 exposes endpoints map[pod1:[100] pod2:[101]] (7.637489179s elapsed)
STEP: Deleting pod pod1 in namespace services-831
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-831 to expose endpoints map[pod2:[101]]
Feb 11 14:23:00.310: INFO: successfully validated that service multi-endpoint-test in namespace services-831 exposes endpoints map[pod2:[101]] (1.095416999s elapsed)
STEP: Deleting pod pod2 in namespace services-831
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-831 to expose endpoints map[]
Feb 11 14:23:00.427: INFO: successfully validated that service multi-endpoint-test in namespace services-831 exposes endpoints map[] (102.429156ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:23:00.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-831" for this suite.
Feb 11 14:23:22.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:23:22.680: INFO: namespace services-831 deletion completed in 22.17598922s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:39.558 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
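A sketch of the multiport service under test: two named ports targeting container ports 100 and 101, which is why the endpoints maps above read pod1:[100] and pod2:[101]; the selector label is illustrative:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// multiportService maps each named service port to a distinct container
// port, so each backing pod appears in the endpoints object with the port
// it actually serves.
func multiportService() *corev1.Service {
    return &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
        Spec: corev1.ServiceSpec{
            Selector: map[string]string{"app": "multi-endpoint-test"},
            Ports: []corev1.ServicePort{
                {Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
                {Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
            },
        },
    }
}

------------------------------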
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:23:22.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-774988aa-f89e-414a-8850-822b747e9e7c
STEP: Creating a pod to test consume configMaps
Feb 11 14:23:22.899: INFO: Waiting up to 5m0s for pod "pod-configmaps-00c8e265-dd9a-488d-b10c-dd6f32b76e80" in namespace "configmap-2114" to be "success or failure"
Feb 11 14:23:22.915: INFO: Pod "pod-configmaps-00c8e265-dd9a-488d-b10c-dd6f32b76e80": Phase="Pending", Reason="", readiness=false. Elapsed: 16.045107ms
Feb 11 14:23:24.928: INFO: Pod "pod-configmaps-00c8e265-dd9a-488d-b10c-dd6f32b76e80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028289633s
Feb 11 14:23:26.944: INFO: Pod "pod-configmaps-00c8e265-dd9a-488d-b10c-dd6f32b76e80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044190168s
Feb 11 14:23:28.952: INFO: Pod "pod-configmaps-00c8e265-dd9a-488d-b10c-dd6f32b76e80": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052320391s
Feb 11 14:23:30.965: INFO: Pod "pod-configmaps-00c8e265-dd9a-488d-b10c-dd6f32b76e80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065671905s
STEP: Saw pod success
Feb 11 14:23:30.965: INFO: Pod "pod-configmaps-00c8e265-dd9a-488d-b10c-dd6f32b76e80" satisfied condition "success or failure"
Feb 11 14:23:30.969: INFO: Trying to get logs from node iruya-node pod pod-configmaps-00c8e265-dd9a-488d-b10c-dd6f32b76e80 container configmap-volume-test: 
STEP: delete the pod
Feb 11 14:23:31.067: INFO: Waiting for pod pod-configmaps-00c8e265-dd9a-488d-b10c-dd6f32b76e80 to disappear
Feb 11 14:23:31.079: INFO: Pod pod-configmaps-00c8e265-dd9a-488d-b10c-dd6f32b76e80 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:23:31.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2114" for this suite.
Feb 11 14:23:37.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:23:37.320: INFO: namespace configmap-2114 deletion completed in 6.232305891s

• [SLOW TEST:14.640 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
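The "mappings" in the test name are ConfigMap volume Items: instead of projecting every key under its own name, each listed key is remapped to a chosen path inside the mount. A sketch with illustrative names:

package sketch

import corev1 "k8s.io/api/core/v1"

// mappedConfigMapVolume mounts only the listed key, renamed to the given
// relative path; unlisted keys are not projected at all.
func mappedConfigMapVolume() corev1.Volume {
    return corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{
                    Name: "configmap-test-volume-map",
                },
                Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
            },
        },
    }
}

------------------------------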
SSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:23:37.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-w84gx in namespace proxy-6047
I0211 14:23:37.583276       9 runners.go:180] Created replication controller with name: proxy-service-w84gx, namespace: proxy-6047, replica count: 1
I0211 14:23:38.634339       9 runners.go:180] proxy-service-w84gx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 14:23:39.634957       9 runners.go:180] proxy-service-w84gx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 14:23:40.636243       9 runners.go:180] proxy-service-w84gx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 14:23:41.636828       9 runners.go:180] proxy-service-w84gx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 14:23:42.637437       9 runners.go:180] proxy-service-w84gx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 14:23:43.638021       9 runners.go:180] proxy-service-w84gx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 14:23:44.638538       9 runners.go:180] proxy-service-w84gx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 14:23:45.639036       9 runners.go:180] proxy-service-w84gx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 14:23:46.639576       9 runners.go:180] proxy-service-w84gx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 14:23:47.640330       9 runners.go:180] proxy-service-w84gx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 14:23:48.641354       9 runners.go:180] proxy-service-w84gx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 14:23:49.643049       9 runners.go:180] proxy-service-w84gx Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 11 14:23:49.673: INFO: setup took 12.224777845s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
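Every attempt below hits an apiserver proxy path of the form /api/v1/namespaces/<ns>/{pods|services}/<scheme>:<name>:<port>/proxy/, where an omitted scheme or port falls back to defaults; a small helper reproducing that shape:

package sketch

import "fmt"

// proxyURL builds the apiserver proxy path used by the attempts below.
// kind is "pods" or "services"; scheme and port may be empty.
func proxyURL(ns, kind, scheme, name, port string) string {
    target := name
    if scheme != "" {
        target = scheme + ":" + target
    }
    if port != "" {
        target = target + ":" + port
    }
    return fmt.Sprintf("/api/v1/namespaces/%s/%s/%s/proxy/", ns, kind, target)
}

// e.g. proxyURL("proxy-6047", "pods", "http", "proxy-service-w84gx-9qfm8", "1080")
// yields "/api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:1080/proxy/".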
Feb 11 14:23:49.747: INFO: (0) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:1080/proxy/: ... (200; 73.799892ms)
Feb 11 14:23:49.749: INFO: (0) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 76.031725ms)
Feb 11 14:23:49.751: INFO: (0) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8/proxy/: test (200; 77.781887ms)
Feb 11 14:23:49.751: INFO: (0) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname2/proxy/: bar (200; 78.460358ms)
Feb 11 14:23:49.752: INFO: (0) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:1080/proxy/: test<... (200; 78.21978ms)
Feb 11 14:23:49.752: INFO: (0) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 79.215497ms)
Feb 11 14:23:49.758: INFO: (0) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 85.433169ms)
Feb 11 14:23:49.760: INFO: (0) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname1/proxy/: foo (200; 86.650445ms)
Feb 11 14:23:49.761: INFO: (0) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname2/proxy/: bar (200; 87.669331ms)
Feb 11 14:23:49.761: INFO: (0) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 87.615046ms)
Feb 11 14:23:49.794: INFO: (0) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname1/proxy/: foo (200; 121.154311ms)
Feb 11 14:23:49.796: INFO: (0) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname2/proxy/: tls qux (200; 122.344932ms)
Feb 11 14:23:49.796: INFO: (0) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname1/proxy/: tls baz (200; 122.265021ms)
Feb 11 14:23:49.802: INFO: (0) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:462/proxy/: tls qux (200; 128.915711ms)
Feb 11 14:23:49.803: INFO: (0) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:460/proxy/: tls baz (200; 129.210786ms)
Feb 11 14:23:49.867: INFO: (0) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: test<... (200; 48.654928ms)
Feb 11 14:23:49.918: INFO: (1) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 49.250908ms)
Feb 11 14:23:49.918: INFO: (1) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 50.322774ms)
Feb 11 14:23:49.919: INFO: (1) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: ... (200; 50.182988ms)
Feb 11 14:23:49.919: INFO: (1) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8/proxy/: test (200; 50.252005ms)
Feb 11 14:23:49.919: INFO: (1) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:462/proxy/: tls qux (200; 51.289699ms)
Feb 11 14:23:49.919: INFO: (1) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname2/proxy/: bar (200; 50.572515ms)
Feb 11 14:23:49.923: INFO: (1) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname1/proxy/: foo (200; 54.502909ms)
Feb 11 14:23:49.923: INFO: (1) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname1/proxy/: foo (200; 54.493055ms)
Feb 11 14:23:49.923: INFO: (1) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname2/proxy/: bar (200; 54.42648ms)
Feb 11 14:23:49.923: INFO: (1) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname2/proxy/: tls qux (200; 54.585317ms)
Feb 11 14:23:49.923: INFO: (1) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname1/proxy/: tls baz (200; 54.58955ms)
Feb 11 14:23:49.933: INFO: (2) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:1080/proxy/: test<... (200; 7.142308ms)
Feb 11 14:23:49.933: INFO: (2) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 9.689525ms)
Feb 11 14:23:49.933: INFO: (2) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: test (200; 9.229735ms)
Feb 11 14:23:49.934: INFO: (2) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:460/proxy/: tls baz (200; 8.733945ms)
Feb 11 14:23:49.934: INFO: (2) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:1080/proxy/: ... (200; 9.632118ms)
Feb 11 14:23:49.955: INFO: (2) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname1/proxy/: foo (200; 28.887135ms)
Feb 11 14:23:49.955: INFO: (2) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname2/proxy/: bar (200; 28.668116ms)
Feb 11 14:23:49.955: INFO: (2) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname2/proxy/: tls qux (200; 29.548226ms)
Feb 11 14:23:49.955: INFO: (2) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname1/proxy/: foo (200; 28.90327ms)
Feb 11 14:23:49.955: INFO: (2) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname1/proxy/: tls baz (200; 29.756759ms)
Feb 11 14:23:49.955: INFO: (2) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname2/proxy/: bar (200; 30.591201ms)
Feb 11 14:23:49.963: INFO: (3) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 8.329468ms)
Feb 11 14:23:49.964: INFO: (3) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:1080/proxy/: test<... (200; 8.123747ms)
Feb 11 14:23:49.964: INFO: (3) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8/proxy/: test (200; 8.640044ms)
Feb 11 14:23:49.964: INFO: (3) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:1080/proxy/: ... (200; 8.482044ms)
Feb 11 14:23:49.964: INFO: (3) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 8.725874ms)
Feb 11 14:23:49.964: INFO: (3) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 8.227003ms)
Feb 11 14:23:49.964: INFO: (3) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 8.573839ms)
Feb 11 14:23:49.964: INFO: (3) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:460/proxy/: tls baz (200; 8.862226ms)
Feb 11 14:23:49.964: INFO: (3) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: test (200; 11.932993ms)
Feb 11 14:23:49.982: INFO: (4) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname2/proxy/: tls qux (200; 12.681574ms)
Feb 11 14:23:49.983: INFO: (4) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:1080/proxy/: ... (200; 12.891776ms)
Feb 11 14:23:49.983: INFO: (4) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 13.273382ms)
Feb 11 14:23:49.984: INFO: (4) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:1080/proxy/: test<... (200; 14.160015ms)
Feb 11 14:23:49.984: INFO: (4) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname1/proxy/: tls baz (200; 14.557756ms)
Feb 11 14:23:49.985: INFO: (4) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 15.190932ms)
Feb 11 14:23:49.986: INFO: (4) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname1/proxy/: foo (200; 15.985022ms)
Feb 11 14:23:49.986: INFO: (4) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: test<... (200; 17.115632ms)
Feb 11 14:23:50.005: INFO: (5) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 17.460327ms)
Feb 11 14:23:50.004: INFO: (5) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 17.537735ms)
Feb 11 14:23:50.005: INFO: (5) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 17.916177ms)
Feb 11 14:23:50.006: INFO: (5) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:1080/proxy/: ... (200; 17.984047ms)
Feb 11 14:23:50.006: INFO: (5) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8/proxy/: test (200; 18.179461ms)
Feb 11 14:23:50.007: INFO: (5) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:462/proxy/: tls qux (200; 19.430935ms)
Feb 11 14:23:50.007: INFO: (5) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: test<... (200; 31.922783ms)
Feb 11 14:23:50.043: INFO: (6) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8/proxy/: test (200; 31.782596ms)
Feb 11 14:23:50.044: INFO: (6) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname2/proxy/: bar (200; 32.397661ms)
Feb 11 14:23:50.044: INFO: (6) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 32.238389ms)
Feb 11 14:23:50.045: INFO: (6) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname1/proxy/: tls baz (200; 33.911567ms)
Feb 11 14:23:50.045: INFO: (6) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:460/proxy/: tls baz (200; 33.564403ms)
Feb 11 14:23:50.045: INFO: (6) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname2/proxy/: bar (200; 33.756485ms)
Feb 11 14:23:50.045: INFO: (6) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:1080/proxy/: ... (200; 33.954328ms)
Feb 11 14:23:50.045: INFO: (6) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: ... (200; 17.929404ms)
Feb 11 14:23:50.072: INFO: (7) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 17.855269ms)
Feb 11 14:23:50.073: INFO: (7) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8/proxy/: test (200; 18.870768ms)
Feb 11 14:23:50.073: INFO: (7) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 18.904184ms)
Feb 11 14:23:50.073: INFO: (7) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:460/proxy/: tls baz (200; 18.907445ms)
Feb 11 14:23:50.078: INFO: (7) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: test<... (200; 27.648839ms)
Feb 11 14:23:50.085: INFO: (7) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname2/proxy/: bar (200; 31.487563ms)
Feb 11 14:23:50.093: INFO: (7) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname1/proxy/: foo (200; 38.210411ms)
Feb 11 14:23:50.094: INFO: (7) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname1/proxy/: tls baz (200; 39.651872ms)
Feb 11 14:23:50.145: INFO: (8) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:462/proxy/: tls qux (200; 49.151654ms)
Feb 11 14:23:50.145: INFO: (8) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:1080/proxy/: ... (200; 50.652683ms)
Feb 11 14:23:50.145: INFO: (8) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: test<... (200; 50.300258ms)
Feb 11 14:23:50.145: INFO: (8) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8/proxy/: test (200; 50.864546ms)
Feb 11 14:23:50.145: INFO: (8) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 50.383462ms)
Feb 11 14:23:50.146: INFO: (8) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname1/proxy/: foo (200; 50.306013ms)
Feb 11 14:23:50.146: INFO: (8) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname1/proxy/: tls baz (200; 50.425363ms)
Feb 11 14:23:50.146: INFO: (8) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname2/proxy/: bar (200; 50.9997ms)
Feb 11 14:23:50.146: INFO: (8) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname2/proxy/: tls qux (200; 51.665543ms)
Feb 11 14:23:50.146: INFO: (8) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname2/proxy/: bar (200; 51.143158ms)
Feb 11 14:23:50.168: INFO: (9) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname1/proxy/: foo (200; 21.513347ms)
Feb 11 14:23:50.169: INFO: (9) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8/proxy/: test (200; 21.48726ms)
Feb 11 14:23:50.169: INFO: (9) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 22.586442ms)
Feb 11 14:23:50.171: INFO: (9) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:462/proxy/: tls qux (200; 23.361041ms)
Feb 11 14:23:50.171: INFO: (9) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname2/proxy/: tls qux (200; 23.554989ms)
Feb 11 14:23:50.171: INFO: (9) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname1/proxy/: tls baz (200; 23.630815ms)
Feb 11 14:23:50.171: INFO: (9) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:460/proxy/: tls baz (200; 23.66824ms)
Feb 11 14:23:50.172: INFO: (9) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:1080/proxy/: test<... (200; 24.315417ms)
Feb 11 14:23:50.172: INFO: (9) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: ... (200; 24.87142ms)
Feb 11 14:23:50.172: INFO: (9) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 25.188774ms)
Feb 11 14:23:50.173: INFO: (9) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname2/proxy/: bar (200; 25.522041ms)
Feb 11 14:23:50.173: INFO: (9) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname2/proxy/: bar (200; 26.433537ms)
Feb 11 14:23:50.172: INFO: (9) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname1/proxy/: foo (200; 25.423379ms)
Feb 11 14:23:50.174: INFO: (9) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 26.75953ms)
Feb 11 14:23:50.174: INFO: (9) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 26.236353ms)
Feb 11 14:23:50.184: INFO: (10) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:462/proxy/: tls qux (200; 10.081713ms)
Feb 11 14:23:50.185: INFO: (10) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:460/proxy/: tls baz (200; 10.773959ms)
Feb 11 14:23:50.186: INFO: (10) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:1080/proxy/: ... (200; 11.858845ms)
Feb 11 14:23:50.186: INFO: (10) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname1/proxy/: foo (200; 11.889988ms)
Feb 11 14:23:50.196: INFO: (10) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8/proxy/: test (200; 22.011408ms)
Feb 11 14:23:50.196: INFO: (10) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 22.520718ms)
Feb 11 14:23:50.196: INFO: (10) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 22.108066ms)
Feb 11 14:23:50.196: INFO: (10) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname2/proxy/: tls qux (200; 22.185567ms)
Feb 11 14:23:50.196: INFO: (10) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 21.974084ms)
Feb 11 14:23:50.196: INFO: (10) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: test<... (200; 23.67122ms)
Feb 11 14:23:50.198: INFO: (10) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 24.220248ms)
Feb 11 14:23:50.200: INFO: (10) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname2/proxy/: bar (200; 25.63275ms)
Feb 11 14:23:50.204: INFO: (10) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname2/proxy/: bar (200; 29.646612ms)
Feb 11 14:23:50.214: INFO: (11) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname2/proxy/: bar (200; 9.897916ms)
Feb 11 14:23:50.214: INFO: (11) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 9.929497ms)
Feb 11 14:23:50.216: INFO: (11) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 12.039984ms)
Feb 11 14:23:50.217: INFO: (11) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:1080/proxy/: ... (200; 12.886685ms)
Feb 11 14:23:50.222: INFO: (11) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:1080/proxy/: test<... (200; 17.931621ms)
Feb 11 14:23:50.223: INFO: (11) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 18.644069ms)
Feb 11 14:23:50.223: INFO: (11) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 19.079504ms)
Feb 11 14:23:50.224: INFO: (11) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8/proxy/: test (200; 19.634413ms)
Feb 11 14:23:50.224: INFO: (11) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:460/proxy/: tls baz (200; 19.538327ms)
Feb 11 14:23:50.224: INFO: (11) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: test<... (200; 19.876719ms)
Feb 11 14:23:50.249: INFO: (12) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: test (200; 24.719191ms)
Feb 11 14:23:50.254: INFO: (12) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname2/proxy/: bar (200; 25.19449ms)
Feb 11 14:23:50.254: INFO: (12) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 25.41759ms)
Feb 11 14:23:50.254: INFO: (12) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 25.98911ms)
Feb 11 14:23:50.254: INFO: (12) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:1080/proxy/: ... (200; 26.314299ms)
Feb 11 14:23:50.255: INFO: (12) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 26.07375ms)
Feb 11 14:23:50.255: INFO: (12) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 25.952089ms)
Feb 11 14:23:50.255: INFO: (12) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:462/proxy/: tls qux (200; 26.740492ms)
Feb 11 14:23:50.255: INFO: (12) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname1/proxy/: tls baz (200; 26.952201ms)
Feb 11 14:23:50.262: INFO: (13) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:1080/proxy/: test<... (200; 6.321935ms)
Feb 11 14:23:50.262: INFO: (13) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: ... (200; 16.274794ms)
Feb 11 14:23:50.272: INFO: (13) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname1/proxy/: foo (200; 16.510319ms)
Feb 11 14:23:50.272: INFO: (13) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:460/proxy/: tls baz (200; 16.459289ms)
Feb 11 14:23:50.275: INFO: (13) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 19.443908ms)
Feb 11 14:23:50.275: INFO: (13) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname1/proxy/: tls baz (200; 19.480018ms)
Feb 11 14:23:50.276: INFO: (13) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8/proxy/: test (200; 20.126908ms)
Feb 11 14:23:50.277: INFO: (13) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname1/proxy/: foo (200; 21.269283ms)
Feb 11 14:23:50.277: INFO: (13) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname2/proxy/: tls qux (200; 21.318259ms)
Feb 11 14:23:50.278: INFO: (13) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname2/proxy/: bar (200; 21.879234ms)
Feb 11 14:23:50.278: INFO: (13) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname2/proxy/: bar (200; 22.189635ms)
Feb 11 14:23:50.293: INFO: (14) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:460/proxy/: tls baz (200; 14.986823ms)
Feb 11 14:23:50.294: INFO: (14) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:462/proxy/: tls qux (200; 16.096673ms)
Feb 11 14:23:50.294: INFO: (14) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 16.064568ms)
Feb 11 14:23:50.294: INFO: (14) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 15.922129ms)
Feb 11 14:23:50.298: INFO: (14) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:1080/proxy/: ... (200; 19.360167ms)
Feb 11 14:23:50.301: INFO: (14) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: test (200; 22.766472ms)
Feb 11 14:23:50.301: INFO: (14) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 22.881946ms)
Feb 11 14:23:50.301: INFO: (14) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:1080/proxy/: test<... (200; 22.980331ms)
Feb 11 14:23:50.302: INFO: (14) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname2/proxy/: bar (200; 23.058756ms)
Feb 11 14:23:50.302: INFO: (14) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname1/proxy/: foo (200; 23.191848ms)
Feb 11 14:23:50.302: INFO: (14) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname1/proxy/: tls baz (200; 23.37097ms)
Feb 11 14:23:50.302: INFO: (14) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 23.68478ms)
Feb 11 14:23:50.302: INFO: (14) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname2/proxy/: tls qux (200; 23.51882ms)
Feb 11 14:23:50.305: INFO: (14) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname2/proxy/: bar (200; 26.920128ms)
Feb 11 14:23:50.305: INFO: (14) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname1/proxy/: foo (200; 27.208762ms)
Feb 11 14:23:50.323: INFO: (15) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 16.15611ms)
Feb 11 14:23:50.325: INFO: (15) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 18.52438ms)
Feb 11 14:23:50.325: INFO: (15) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8/proxy/: test (200; 18.314203ms)
Feb 11 14:23:50.325: INFO: (15) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: ... (200; 18.348136ms)
Feb 11 14:23:50.325: INFO: (15) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:1080/proxy/: test<... (200; 18.813624ms)
Feb 11 14:23:50.327: INFO: (15) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname2/proxy/: tls qux (200; 20.002095ms)
Feb 11 14:23:50.327: INFO: (15) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname1/proxy/: foo (200; 20.374536ms)
Feb 11 14:23:50.327: INFO: (15) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname1/proxy/: tls baz (200; 20.910056ms)
Feb 11 14:23:50.328: INFO: (15) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname2/proxy/: bar (200; 21.334111ms)
Feb 11 14:23:50.328: INFO: (15) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname1/proxy/: foo (200; 21.252406ms)
Feb 11 14:23:50.328: INFO: (15) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname2/proxy/: bar (200; 21.171142ms)
Feb 11 14:23:50.337: INFO: (16) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 8.87728ms)
Feb 11 14:23:50.337: INFO: (16) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 8.69013ms)
Feb 11 14:23:50.337: INFO: (16) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:462/proxy/: tls qux (200; 8.676064ms)
Feb 11 14:23:50.337: INFO: (16) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 8.955712ms)
Feb 11 14:23:50.337: INFO: (16) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: test<... (200; 9.229615ms)
Feb 11 14:23:50.337: INFO: (16) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:460/proxy/: tls baz (200; 9.671061ms)
Feb 11 14:23:50.337: INFO: (16) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:1080/proxy/: ... (200; 9.474881ms)
Feb 11 14:23:50.338: INFO: (16) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8/proxy/: test (200; 9.815039ms)
Feb 11 14:23:50.341: INFO: (16) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname1/proxy/: foo (200; 13.631653ms)
Feb 11 14:23:50.342: INFO: (16) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname2/proxy/: bar (200; 14.163596ms)
Feb 11 14:23:50.342: INFO: (16) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname1/proxy/: foo (200; 13.832023ms)
Feb 11 14:23:50.342: INFO: (16) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname2/proxy/: bar (200; 14.335628ms)
Feb 11 14:23:50.343: INFO: (16) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname2/proxy/: tls qux (200; 15.110067ms)
Feb 11 14:23:50.343: INFO: (16) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname1/proxy/: tls baz (200; 15.237178ms)
Feb 11 14:23:50.353: INFO: (17) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 9.345292ms)
Feb 11 14:23:50.357: INFO: (17) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 12.803316ms)
Feb 11 14:23:50.357: INFO: (17) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname2/proxy/: bar (200; 13.488037ms)
Feb 11 14:23:50.358: INFO: (17) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname1/proxy/: foo (200; 14.156517ms)
Feb 11 14:23:50.358: INFO: (17) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 14.187371ms)
Feb 11 14:23:50.359: INFO: (17) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname1/proxy/: tls baz (200; 14.99406ms)
Feb 11 14:23:50.359: INFO: (17) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:1080/proxy/: test<... (200; 15.029201ms)
Feb 11 14:23:50.359: INFO: (17) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: ... (200; 15.645753ms)
Feb 11 14:23:50.359: INFO: (17) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:460/proxy/: tls baz (200; 15.609235ms)
Feb 11 14:23:50.359: INFO: (17) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname2/proxy/: bar (200; 15.521831ms)
Feb 11 14:23:50.360: INFO: (17) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8/proxy/: test (200; 15.757087ms)
Feb 11 14:23:50.360: INFO: (17) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname1/proxy/: foo (200; 16.077336ms)
Feb 11 14:23:50.366: INFO: (18) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 6.556939ms)
Feb 11 14:23:50.367: INFO: (18) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 6.994294ms)
Feb 11 14:23:50.367: INFO: (18) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 6.846221ms)
Feb 11 14:23:50.367: INFO: (18) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:460/proxy/: tls baz (200; 7.0368ms)
Feb 11 14:23:50.367: INFO: (18) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 7.103385ms)
Feb 11 14:23:50.367: INFO: (18) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8/proxy/: test (200; 7.045571ms)
Feb 11 14:23:50.367: INFO: (18) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:1080/proxy/: ... (200; 7.360227ms)
Feb 11 14:23:50.367: INFO: (18) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: test<... (200; 10.598001ms)
Feb 11 14:23:50.371: INFO: (18) /api/v1/namespaces/proxy-6047/services/http:proxy-service-w84gx:portname2/proxy/: bar (200; 11.19738ms)
Feb 11 14:23:50.373: INFO: (18) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname2/proxy/: tls qux (200; 13.100313ms)
Feb 11 14:23:50.373: INFO: (18) /api/v1/namespaces/proxy-6047/services/proxy-service-w84gx:portname1/proxy/: foo (200; 12.960702ms)
Feb 11 14:23:50.373: INFO: (18) /api/v1/namespaces/proxy-6047/services/https:proxy-service-w84gx:tlsportname1/proxy/: tls baz (200; 13.18883ms)
Feb 11 14:23:50.380: INFO: (19) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:462/proxy/: tls qux (200; 7.163233ms)
Feb 11 14:23:50.381: INFO: (19) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:1080/proxy/: test<... (200; 7.197817ms)
Feb 11 14:23:50.381: INFO: (19) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8/proxy/: test (200; 7.764148ms)
Feb 11 14:23:50.381: INFO: (19) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 7.908314ms)
Feb 11 14:23:50.382: INFO: (19) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:460/proxy/: tls baz (200; 8.601916ms)
Feb 11 14:23:50.382: INFO: (19) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 9.02073ms)
Feb 11 14:23:50.383: INFO: (19) /api/v1/namespaces/proxy-6047/pods/http:proxy-service-w84gx-9qfm8:1080/proxy/: ... (200; 9.418071ms)
Feb 11 14:23:50.383: INFO: (19) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:162/proxy/: bar (200; 9.52693ms)
Feb 11 14:23:50.383: INFO: (19) /api/v1/namespaces/proxy-6047/pods/proxy-service-w84gx-9qfm8:160/proxy/: foo (200; 9.727012ms)
Feb 11 14:23:50.383: INFO: (19) /api/v1/namespaces/proxy-6047/pods/https:proxy-service-w84gx-9qfm8:443/proxy/: ... (200; ...)
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 11 14:24:12.930: INFO: Waiting up to 5m0s for pod "pod-d0f48ab8-0fd5-4f16-a00a-a01785ce7505" in namespace "emptydir-2410" to be "success or failure"
Feb 11 14:24:12.936: INFO: Pod "pod-d0f48ab8-0fd5-4f16-a00a-a01785ce7505": Phase="Pending", Reason="", readiness=false. Elapsed: 5.843809ms
Feb 11 14:24:14.997: INFO: Pod "pod-d0f48ab8-0fd5-4f16-a00a-a01785ce7505": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066229046s
Feb 11 14:24:17.015: INFO: Pod "pod-d0f48ab8-0fd5-4f16-a00a-a01785ce7505": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084625259s
Feb 11 14:24:19.033: INFO: Pod "pod-d0f48ab8-0fd5-4f16-a00a-a01785ce7505": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102986664s
Feb 11 14:24:21.049: INFO: Pod "pod-d0f48ab8-0fd5-4f16-a00a-a01785ce7505": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.118021747s
STEP: Saw pod success
Feb 11 14:24:21.049: INFO: Pod "pod-d0f48ab8-0fd5-4f16-a00a-a01785ce7505" satisfied condition "success or failure"
Feb 11 14:24:21.052: INFO: Trying to get logs from node iruya-node pod pod-d0f48ab8-0fd5-4f16-a00a-a01785ce7505 container test-container: <nil>
STEP: delete the pod
Feb 11 14:24:21.121: INFO: Waiting for pod pod-d0f48ab8-0fd5-4f16-a00a-a01785ce7505 to disappear
Feb 11 14:24:21.147: INFO: Pod pod-d0f48ab8-0fd5-4f16-a00a-a01785ce7505 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:24:21.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2410" for this suite.
Feb 11 14:24:27.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:24:27.440: INFO: namespace emptydir-2410 deletion completed in 6.264389294s

• [SLOW TEST:14.709 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
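
The spec above drives a pod whose volume is a memory-backed emptyDir (tmpfs), then polls the pod phase from Pending to Succeeded, which is the framework's generic "success or failure" wait. For readers following along outside the framework, a minimal client-go sketch of an equivalent pod follows; it assumes client-go for Kubernetes 1.15 (whose Create call takes no context argument), and the pod name, namespace, and busybox command are illustrative stand-ins for the test's mounttest image.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig this run uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" is what makes this emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "echo hi > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
			}},
		},
	}
	// client-go for 1.15 takes no context argument; newer releases do.
	if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
	fmt.Println("created; poll .Status.Phase until Succeeded, as the framework does above")
}
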
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:24:27.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 11 14:24:27.557: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:24:42.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5347" for this suite.
Feb 11 14:24:48.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:24:48.615: INFO: namespace init-container-5347 deletion completed in 6.210339889s

• [SLOW TEST:21.174 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
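
The init-container spec only logs "PodSpec: initContainers in spec.initContainers", so the shape it exercises is easy to miss: on a RestartPolicy=Never pod, every init container must still run to completion, in order, before the app container starts. A sketch of such a PodSpec, with illustrative names, images, and commands rather than the test's own:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initDemoPod mirrors the shape this spec exercises: with RestartPolicy
// Never, both init containers run to completion, in order, before the
// app container starts.
func initDemoPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"true"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox:1.29", Command: []string{"true"}},
			},
		},
	}
}
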
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:24:48.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 14:24:48.801: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 19.205162ms)
Feb 11 14:24:48.809: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.293102ms)
Feb 11 14:24:48.815: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.973648ms)
Feb 11 14:24:48.824: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.029612ms)
Feb 11 14:24:48.832: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.892454ms)
Feb 11 14:24:48.839: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.951471ms)
Feb 11 14:24:48.847: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.040457ms)
Feb 11 14:24:48.907: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 60.121195ms)
Feb 11 14:24:48.916: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.748075ms)
Feb 11 14:24:48.923: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.271832ms)
Feb 11 14:24:48.931: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.948299ms)
Feb 11 14:24:48.936: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.970235ms)
Feb 11 14:24:48.940: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.748802ms)
Feb 11 14:24:48.944: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.5844ms)
Feb 11 14:24:48.948: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.107779ms)
Feb 11 14:24:48.952: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.6999ms)
Feb 11 14:24:48.956: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.149905ms)
Feb 11 14:24:48.961: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.615446ms)
Feb 11 14:24:48.969: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.687236ms)
Feb 11 14:24:48.975: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.859512ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:24:48.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9978" for this suite.
Feb 11 14:24:55.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:24:55.269: INFO: namespace proxy-9978 deletion completed in 6.289356795s

• [SLOW TEST:6.653 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
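
Each numbered line in the spec above is one GET against the node's proxy subresource, so the latency figures cover the apiserver-to-kubelet hop as well as the kubelet's /logs/ handler. A hedged sketch of issuing the same request with client-go's REST client; DoRaw matches the pre-context client-go used with 1.15, and the helper itself is illustrative, not framework code.

package e2esketch

import (
	"k8s.io/client-go/kubernetes"
)

// nodeLogsViaProxy issues the same GET the spec above times: the
// kubelet's /logs/ listing reached through the apiserver's node proxy
// subresource, /api/v1/nodes/<node>/proxy/logs/.
func nodeLogsViaProxy(client kubernetes.Interface, node string) ([]byte, error) {
	return client.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name(node).
		SubResource("proxy").
		Suffix("logs/").
		DoRaw()
}
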
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:24:55.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb 11 14:24:55.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7073 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 11 14:25:05.545: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0211 14:25:04.212388    1410 log.go:172] (0xc000102580) (0xc00074e8c0) Create stream\nI0211 14:25:04.212498    1410 log.go:172] (0xc000102580) (0xc00074e8c0) Stream added, broadcasting: 1\nI0211 14:25:04.219313    1410 log.go:172] (0xc000102580) Reply frame received for 1\nI0211 14:25:04.219377    1410 log.go:172] (0xc000102580) (0xc00074e960) Create stream\nI0211 14:25:04.219386    1410 log.go:172] (0xc000102580) (0xc00074e960) Stream added, broadcasting: 3\nI0211 14:25:04.221591    1410 log.go:172] (0xc000102580) Reply frame received for 3\nI0211 14:25:04.221640    1410 log.go:172] (0xc000102580) (0xc000226000) Create stream\nI0211 14:25:04.221652    1410 log.go:172] (0xc000102580) (0xc000226000) Stream added, broadcasting: 5\nI0211 14:25:04.222906    1410 log.go:172] (0xc000102580) Reply frame received for 5\nI0211 14:25:04.222924    1410 log.go:172] (0xc000102580) (0xc000228000) Create stream\nI0211 14:25:04.222930    1410 log.go:172] (0xc000102580) (0xc000228000) Stream added, broadcasting: 7\nI0211 14:25:04.224282    1410 log.go:172] (0xc000102580) Reply frame received for 7\nI0211 14:25:04.224413    1410 log.go:172] (0xc00074e960) (3) Writing data frame\nI0211 14:25:04.224556    1410 log.go:172] (0xc00074e960) (3) Writing data frame\nI0211 14:25:04.230466    1410 log.go:172] (0xc000102580) Data frame received for 5\nI0211 14:25:04.230491    1410 log.go:172] (0xc000226000) (5) Data frame handling\nI0211 14:25:04.230526    1410 log.go:172] (0xc000226000) (5) Data frame sent\nI0211 14:25:04.233375    1410 log.go:172] (0xc000102580) Data frame received for 5\nI0211 14:25:04.233387    1410 log.go:172] (0xc000226000) (5) Data frame handling\nI0211 14:25:04.233394    1410 log.go:172] (0xc000226000) (5) Data frame sent\nI0211 14:25:05.488166    1410 log.go:172] (0xc000102580) Data frame received for 1\nI0211 14:25:05.488229    1410 log.go:172] (0xc00074e8c0) (1) Data frame handling\nI0211 14:25:05.488251    1410 log.go:172] (0xc00074e8c0) (1) Data frame sent\nI0211 14:25:05.488271    1410 log.go:172] (0xc000102580) (0xc00074e8c0) Stream removed, broadcasting: 1\nI0211 14:25:05.492161    1410 log.go:172] (0xc000102580) (0xc00074e960) Stream removed, broadcasting: 3\nI0211 14:25:05.492363    1410 log.go:172] (0xc000102580) (0xc000226000) Stream removed, broadcasting: 5\nI0211 14:25:05.492674    1410 log.go:172] (0xc000102580) (0xc000228000) Stream removed, broadcasting: 7\nI0211 14:25:05.492731    1410 log.go:172] (0xc000102580) (0xc00074e8c0) Stream removed, broadcasting: 1\nI0211 14:25:05.492749    1410 log.go:172] (0xc000102580) (0xc00074e960) Stream removed, broadcasting: 3\nI0211 14:25:05.492764    1410 log.go:172] (0xc000102580) (0xc000226000) Stream removed, broadcasting: 5\nI0211 14:25:05.492772    1410 log.go:172] (0xc000102580) (0xc000228000) Stream removed, broadcasting: 7\n"
Feb 11 14:25:05.545: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:25:07.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7073" for this suite.
Feb 11 14:25:13.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:25:13.831: INFO: namespace kubectl-7073 deletion completed in 6.261041869s

• [SLOW TEST:18.562 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
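
The stderr above warns that `kubectl run --generator=job/v1` is deprecated; the object it generates is an ordinary batch/v1 Job. A rough client-go equivalent is sketched below under the same 1.15 assumptions as the earlier sketches. Note that --attach/--stdin and the --rm deletion visible in stdout are kubectl-side behaviors layered on top, with no single API call behind them.

package e2esketch

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createBusyboxJob approximates what the deprecated generator expands to.
func createBusyboxJob(client kubernetes.Interface, ns string) (*batchv1.Job, error) {
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:    "e2e-test-rm-busybox-job",
						Image:   "docker.io/library/busybox:1.29",
						Command: []string{"sh", "-c", "cat && echo 'stdin closed'"},
						Stdin:   true,
					}},
				},
			},
		},
	}
	return client.BatchV1().Jobs(ns).Create(job) // 1.15 signature, no context
}
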
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:25:13.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-8c2f1394-4a8d-401b-97fb-ae0807a2701a
STEP: Creating a pod to test consume configMaps
Feb 11 14:25:14.027: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-18538122-9be6-4223-8e88-01084eb53179" in namespace "projected-3075" to be "success or failure"
Feb 11 14:25:14.078: INFO: Pod "pod-projected-configmaps-18538122-9be6-4223-8e88-01084eb53179": Phase="Pending", Reason="", readiness=false. Elapsed: 50.433571ms
Feb 11 14:25:16.097: INFO: Pod "pod-projected-configmaps-18538122-9be6-4223-8e88-01084eb53179": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069063741s
Feb 11 14:25:18.348: INFO: Pod "pod-projected-configmaps-18538122-9be6-4223-8e88-01084eb53179": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320557507s
Feb 11 14:25:20.395: INFO: Pod "pod-projected-configmaps-18538122-9be6-4223-8e88-01084eb53179": Phase="Pending", Reason="", readiness=false. Elapsed: 6.367366635s
Feb 11 14:25:22.401: INFO: Pod "pod-projected-configmaps-18538122-9be6-4223-8e88-01084eb53179": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.373081846s
STEP: Saw pod success
Feb 11 14:25:22.401: INFO: Pod "pod-projected-configmaps-18538122-9be6-4223-8e88-01084eb53179" satisfied condition "success or failure"
Feb 11 14:25:22.403: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-18538122-9be6-4223-8e88-01084eb53179 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Feb 11 14:25:22.526: INFO: Waiting for pod pod-projected-configmaps-18538122-9be6-4223-8e88-01084eb53179 to disappear
Feb 11 14:25:22.535: INFO: Pod pod-projected-configmaps-18538122-9be6-4223-8e88-01084eb53179 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:25:22.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3075" for this suite.
Feb 11 14:25:28.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:25:28.685: INFO: namespace projected-3075 deletion completed in 6.144218536s

• [SLOW TEST:14.853 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
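
The "with mappings" part of this spec is the Items list: the configMap key is remapped to a different path inside the projected volume, and the "non-root" part is a pod-level RunAsUser. A sketch of that volume shape; the key, path, user ID, and busybox reader are illustrative stand-ins for the test's mounttest container.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMapPod sketches the volume shape under test: a
// projected configMap whose Items list remaps one key to a new path,
// read by a non-root user.
func projectedConfigMapPod(cmName string) *corev1.Pod {
	user := int64(1000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &user},
			RestartPolicy:   corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
								// The mapping: key "data-1" appears as path/to/data-2.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "cat /etc/projected/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected"}},
			}},
		},
	}
}
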
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:25:28.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 14:25:28.772: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3297a73-5b26-45a7-851b-fa6f24e21619" in namespace "projected-7528" to be "success or failure"
Feb 11 14:25:28.880: INFO: Pod "downwardapi-volume-f3297a73-5b26-45a7-851b-fa6f24e21619": Phase="Pending", Reason="", readiness=false. Elapsed: 107.85286ms
Feb 11 14:25:30.895: INFO: Pod "downwardapi-volume-f3297a73-5b26-45a7-851b-fa6f24e21619": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12233821s
Feb 11 14:25:32.907: INFO: Pod "downwardapi-volume-f3297a73-5b26-45a7-851b-fa6f24e21619": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134444539s
Feb 11 14:25:34.918: INFO: Pod "downwardapi-volume-f3297a73-5b26-45a7-851b-fa6f24e21619": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145490646s
Feb 11 14:25:36.930: INFO: Pod "downwardapi-volume-f3297a73-5b26-45a7-851b-fa6f24e21619": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.157312112s
STEP: Saw pod success
Feb 11 14:25:36.930: INFO: Pod "downwardapi-volume-f3297a73-5b26-45a7-851b-fa6f24e21619" satisfied condition "success or failure"
Feb 11 14:25:36.934: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f3297a73-5b26-45a7-851b-fa6f24e21619 container client-container: <nil>
STEP: delete the pod
Feb 11 14:25:37.124: INFO: Waiting for pod downwardapi-volume-f3297a73-5b26-45a7-851b-fa6f24e21619 to disappear
Feb 11 14:25:37.133: INFO: Pod downwardapi-volume-f3297a73-5b26-45a7-851b-fa6f24e21619 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:25:37.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7528" for this suite.
Feb 11 14:25:43.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:25:43.423: INFO: namespace projected-7528 deletion completed in 6.280909176s

• [SLOW TEST:14.737 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
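
"Podname only" means the projected downward API volume exposes exactly one file, backed by fieldRef metadata.name. The volume source alone captures it; in this sketch the volume name and file path are illustrative.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
)

// downwardAPIPodnameVolume is the single-file projected downward API
// source this spec reads back: the pod's own name.
func downwardAPIPodnameVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
						}},
					},
				}},
			},
		},
	}
}
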
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:25:43.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 11 14:25:43.503: INFO: Waiting up to 5m0s for pod "pod-70e058bb-4d30-4dd0-85f0-23d86d7c8fba" in namespace "emptydir-6088" to be "success or failure"
Feb 11 14:25:43.509: INFO: Pod "pod-70e058bb-4d30-4dd0-85f0-23d86d7c8fba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.343115ms
Feb 11 14:25:45.523: INFO: Pod "pod-70e058bb-4d30-4dd0-85f0-23d86d7c8fba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01983421s
Feb 11 14:25:47.540: INFO: Pod "pod-70e058bb-4d30-4dd0-85f0-23d86d7c8fba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03664002s
Feb 11 14:25:49.550: INFO: Pod "pod-70e058bb-4d30-4dd0-85f0-23d86d7c8fba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04735553s
Feb 11 14:25:51.593: INFO: Pod "pod-70e058bb-4d30-4dd0-85f0-23d86d7c8fba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09022941s
Feb 11 14:25:53.609: INFO: Pod "pod-70e058bb-4d30-4dd0-85f0-23d86d7c8fba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105815313s
STEP: Saw pod success
Feb 11 14:25:53.609: INFO: Pod "pod-70e058bb-4d30-4dd0-85f0-23d86d7c8fba" satisfied condition "success or failure"
Feb 11 14:25:53.618: INFO: Trying to get logs from node iruya-node pod pod-70e058bb-4d30-4dd0-85f0-23d86d7c8fba container test-container: <nil>
STEP: delete the pod
Feb 11 14:25:53.738: INFO: Waiting for pod pod-70e058bb-4d30-4dd0-85f0-23d86d7c8fba to disappear
Feb 11 14:25:53.785: INFO: Pod pod-70e058bb-4d30-4dd0-85f0-23d86d7c8fba no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:25:53.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6088" for this suite.
Feb 11 14:25:59.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:25:59.955: INFO: namespace emptydir-6088 deletion completed in 6.160595744s

• [SLOW TEST:16.532 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:25:59.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-8640/secret-test-bdabf41d-b164-456d-bc90-5b7d56ffdda0
STEP: Creating a pod to test consume secrets
Feb 11 14:26:00.082: INFO: Waiting up to 5m0s for pod "pod-configmaps-e9146626-c218-40d4-80a2-c6f6c6622d39" in namespace "secrets-8640" to be "success or failure"
Feb 11 14:26:00.099: INFO: Pod "pod-configmaps-e9146626-c218-40d4-80a2-c6f6c6622d39": Phase="Pending", Reason="", readiness=false. Elapsed: 17.244633ms
Feb 11 14:26:02.110: INFO: Pod "pod-configmaps-e9146626-c218-40d4-80a2-c6f6c6622d39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027698157s
Feb 11 14:26:04.121: INFO: Pod "pod-configmaps-e9146626-c218-40d4-80a2-c6f6c6622d39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038859773s
Feb 11 14:26:06.141: INFO: Pod "pod-configmaps-e9146626-c218-40d4-80a2-c6f6c6622d39": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058674587s
Feb 11 14:26:08.150: INFO: Pod "pod-configmaps-e9146626-c218-40d4-80a2-c6f6c6622d39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067484953s
STEP: Saw pod success
Feb 11 14:26:08.150: INFO: Pod "pod-configmaps-e9146626-c218-40d4-80a2-c6f6c6622d39" satisfied condition "success or failure"
Feb 11 14:26:08.157: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e9146626-c218-40d4-80a2-c6f6c6622d39 container env-test: <nil>
STEP: delete the pod
Feb 11 14:26:08.408: INFO: Waiting for pod pod-configmaps-e9146626-c218-40d4-80a2-c6f6c6622d39 to disappear
Feb 11 14:26:08.420: INFO: Pod pod-configmaps-e9146626-c218-40d4-80a2-c6f6c6622d39 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:26:08.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8640" for this suite.
Feb 11 14:26:14.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:26:14.725: INFO: namespace secrets-8640 deletion completed in 6.291891573s

• [SLOW TEST:14.770 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
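
Despite the pod-configmaps- prefix the framework gives the pod name, the source in this spec is a Secret consumed through the environment rather than a volume. The wiring under test is a single EnvVar with a SecretKeyRef, sketched below with an illustrative variable name.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
)

// secretEnvVar wires one key of a Secret into an environment variable,
// the mechanism this spec checks by echoing the variable from its
// env-test container.
func secretEnvVar(secretName, key string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: "SECRET_DATA", // illustrative variable name
		ValueFrom: &corev1.EnvVarSource{
			SecretKeyRef: &corev1.SecretKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
				Key:                  key,
			},
		},
	}
}
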
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:26:14.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Feb 11 14:26:14.857: INFO: Waiting up to 5m0s for pod "var-expansion-b369e3a9-0043-4743-b0b4-e084137e6246" in namespace "var-expansion-38" to be "success or failure"
Feb 11 14:26:14.871: INFO: Pod "var-expansion-b369e3a9-0043-4743-b0b4-e084137e6246": Phase="Pending", Reason="", readiness=false. Elapsed: 13.41158ms
Feb 11 14:26:16.879: INFO: Pod "var-expansion-b369e3a9-0043-4743-b0b4-e084137e6246": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021255266s
Feb 11 14:26:18.888: INFO: Pod "var-expansion-b369e3a9-0043-4743-b0b4-e084137e6246": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030503888s
Feb 11 14:26:21.441: INFO: Pod "var-expansion-b369e3a9-0043-4743-b0b4-e084137e6246": Phase="Pending", Reason="", readiness=false. Elapsed: 6.583756629s
Feb 11 14:26:23.493: INFO: Pod "var-expansion-b369e3a9-0043-4743-b0b4-e084137e6246": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.635550187s
STEP: Saw pod success
Feb 11 14:26:23.493: INFO: Pod "var-expansion-b369e3a9-0043-4743-b0b4-e084137e6246" satisfied condition "success or failure"
Feb 11 14:26:23.545: INFO: Trying to get logs from node iruya-node pod var-expansion-b369e3a9-0043-4743-b0b4-e084137e6246 container dapi-container: <nil>
STEP: delete the pod
Feb 11 14:26:23.788: INFO: Waiting for pod var-expansion-b369e3a9-0043-4743-b0b4-e084137e6246 to disappear
Feb 11 14:26:23.832: INFO: Pod var-expansion-b369e3a9-0043-4743-b0b4-e084137e6246 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:26:23.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-38" for this suite.
Feb 11 14:26:29.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:26:30.060: INFO: namespace var-expansion-38 deletion completed in 6.211360056s

• [SLOW TEST:15.334 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
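
Env composition means a later env entry may reference earlier ones with $(VAR), and the kubelet expands the value before the container starts. The conformance test composes two variables into a third along these lines; the exact names and separator below are a sketch of that shape.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
)

// composedEnv shows $(VAR) expansion: a later env entry may reference
// earlier ones, and the kubelet expands the value before starting the
// container.
func composedEnv() []corev1.EnvVar {
	return []corev1.EnvVar{
		{Name: "FOO", Value: "foo-value"},
		{Name: "BAR", Value: "bar-value"},
		// Seen inside the container as "foo-value;;bar-value".
		{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
	}
}
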
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:26:30.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 14:26:30.216: INFO: Waiting up to 5m0s for pod "downwardapi-volume-576586ed-8eab-42a9-89ef-079f53d53478" in namespace "projected-5322" to be "success or failure"
Feb 11 14:26:30.224: INFO: Pod "downwardapi-volume-576586ed-8eab-42a9-89ef-079f53d53478": Phase="Pending", Reason="", readiness=false. Elapsed: 7.070524ms
Feb 11 14:26:32.229: INFO: Pod "downwardapi-volume-576586ed-8eab-42a9-89ef-079f53d53478": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013015193s
Feb 11 14:26:34.314: INFO: Pod "downwardapi-volume-576586ed-8eab-42a9-89ef-079f53d53478": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097718953s
Feb 11 14:26:36.328: INFO: Pod "downwardapi-volume-576586ed-8eab-42a9-89ef-079f53d53478": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111070319s
Feb 11 14:26:38.340: INFO: Pod "downwardapi-volume-576586ed-8eab-42a9-89ef-079f53d53478": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.123191415s
STEP: Saw pod success
Feb 11 14:26:38.340: INFO: Pod "downwardapi-volume-576586ed-8eab-42a9-89ef-079f53d53478" satisfied condition "success or failure"
Feb 11 14:26:38.344: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-576586ed-8eab-42a9-89ef-079f53d53478 container client-container: <nil>
STEP: delete the pod
Feb 11 14:26:38.443: INFO: Waiting for pod downwardapi-volume-576586ed-8eab-42a9-89ef-079f53d53478 to disappear
Feb 11 14:26:38.448: INFO: Pod downwardapi-volume-576586ed-8eab-42a9-89ef-079f53d53478 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:26:38.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5322" for this suite.
Feb 11 14:26:44.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:26:44.659: INFO: namespace projected-5322 deletion completed in 6.204947818s

• [SLOW TEST:14.599 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
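
Here the downward API file is backed by a resourceFieldRef on limits.memory for a container that sets no memory limit, so the kubelet falls back to reporting the node's allocatable memory, which is what the spec asserts. A sketch of that one volume file; the path is illustrative.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
)

// memoryLimitFile exposes a container's memory limit through the
// downward API. When the named container declares no memory limit, as
// in this spec, the file reports node allocatable memory instead.
func memoryLimitFile(containerName string) corev1.DownwardAPIVolumeFile {
	return corev1.DownwardAPIVolumeFile{
		Path: "memory_limit",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: containerName,
			Resource:      "limits.memory",
		},
	}
}
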
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:26:44.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7664
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 11 14:26:44.798: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 11 14:27:21.312: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7664 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 14:27:21.313: INFO: >>> kubeConfig: /root/.kube/config
I0211 14:27:21.410363       9 log.go:172] (0xc00141e9a0) (0xc002683d60) Create stream
I0211 14:27:21.410719       9 log.go:172] (0xc00141e9a0) (0xc002683d60) Stream added, broadcasting: 1
I0211 14:27:21.428199       9 log.go:172] (0xc00141e9a0) Reply frame received for 1
I0211 14:27:21.428355       9 log.go:172] (0xc00141e9a0) (0xc00226ec80) Create stream
I0211 14:27:21.428374       9 log.go:172] (0xc00141e9a0) (0xc00226ec80) Stream added, broadcasting: 3
I0211 14:27:21.431755       9 log.go:172] (0xc00141e9a0) Reply frame received for 3
I0211 14:27:21.431808       9 log.go:172] (0xc00141e9a0) (0xc002683e00) Create stream
I0211 14:27:21.431827       9 log.go:172] (0xc00141e9a0) (0xc002683e00) Stream added, broadcasting: 5
I0211 14:27:21.434139       9 log.go:172] (0xc00141e9a0) Reply frame received for 5
I0211 14:27:21.608270       9 log.go:172] (0xc00141e9a0) Data frame received for 3
I0211 14:27:21.608501       9 log.go:172] (0xc00226ec80) (3) Data frame handling
I0211 14:27:21.608570       9 log.go:172] (0xc00226ec80) (3) Data frame sent
I0211 14:27:21.754445       9 log.go:172] (0xc00141e9a0) Data frame received for 1
I0211 14:27:21.754684       9 log.go:172] (0xc00141e9a0) (0xc002683e00) Stream removed, broadcasting: 5
I0211 14:27:21.754807       9 log.go:172] (0xc002683d60) (1) Data frame handling
I0211 14:27:21.754889       9 log.go:172] (0xc002683d60) (1) Data frame sent
I0211 14:27:21.754938       9 log.go:172] (0xc00141e9a0) (0xc00226ec80) Stream removed, broadcasting: 3
I0211 14:27:21.755349       9 log.go:172] (0xc00141e9a0) (0xc002683d60) Stream removed, broadcasting: 1
I0211 14:27:21.755635       9 log.go:172] (0xc00141e9a0) Go away received
I0211 14:27:21.756291       9 log.go:172] (0xc00141e9a0) (0xc002683d60) Stream removed, broadcasting: 1
I0211 14:27:21.756386       9 log.go:172] (0xc00141e9a0) (0xc00226ec80) Stream removed, broadcasting: 3
I0211 14:27:21.756404       9 log.go:172] (0xc00141e9a0) (0xc002683e00) Stream removed, broadcasting: 5
Feb 11 14:27:21.756: INFO: Found all expected endpoints: [netserver-0]
Feb 11 14:27:21.766: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7664 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 14:27:21.766: INFO: >>> kubeConfig: /root/.kube/config
I0211 14:27:21.832584       9 log.go:172] (0xc002fa0d10) (0xc00226f040) Create stream
I0211 14:27:21.832726       9 log.go:172] (0xc002fa0d10) (0xc00226f040) Stream added, broadcasting: 1
I0211 14:27:21.842275       9 log.go:172] (0xc002fa0d10) Reply frame received for 1
I0211 14:27:21.842354       9 log.go:172] (0xc002fa0d10) (0xc002683f40) Create stream
I0211 14:27:21.842376       9 log.go:172] (0xc002fa0d10) (0xc002683f40) Stream added, broadcasting: 3
I0211 14:27:21.847737       9 log.go:172] (0xc002fa0d10) Reply frame received for 3
I0211 14:27:21.847937       9 log.go:172] (0xc002fa0d10) (0xc00226f0e0) Create stream
I0211 14:27:21.847953       9 log.go:172] (0xc002fa0d10) (0xc00226f0e0) Stream added, broadcasting: 5
I0211 14:27:21.853143       9 log.go:172] (0xc002fa0d10) Reply frame received for 5
I0211 14:27:22.024574       9 log.go:172] (0xc002fa0d10) Data frame received for 3
I0211 14:27:22.024679       9 log.go:172] (0xc002683f40) (3) Data frame handling
I0211 14:27:22.024724       9 log.go:172] (0xc002683f40) (3) Data frame sent
I0211 14:27:22.206936       9 log.go:172] (0xc002fa0d10) (0xc002683f40) Stream removed, broadcasting: 3
I0211 14:27:22.207212       9 log.go:172] (0xc002fa0d10) Data frame received for 1
I0211 14:27:22.207244       9 log.go:172] (0xc00226f040) (1) Data frame handling
I0211 14:27:22.207285       9 log.go:172] (0xc00226f040) (1) Data frame sent
I0211 14:27:22.207306       9 log.go:172] (0xc002fa0d10) (0xc00226f040) Stream removed, broadcasting: 1
I0211 14:27:22.207521       9 log.go:172] (0xc002fa0d10) (0xc00226f0e0) Stream removed, broadcasting: 5
I0211 14:27:22.207789       9 log.go:172] (0xc002fa0d10) Go away received
I0211 14:27:22.207921       9 log.go:172] (0xc002fa0d10) (0xc00226f040) Stream removed, broadcasting: 1
I0211 14:27:22.207942       9 log.go:172] (0xc002fa0d10) (0xc002683f40) Stream removed, broadcasting: 3
I0211 14:27:22.207957       9 log.go:172] (0xc002fa0d10) (0xc00226f0e0) Stream removed, broadcasting: 5
Feb 11 14:27:22.208: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:27:22.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7664" for this suite.
Feb 11 14:27:46.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:27:46.411: INFO: namespace pod-network-test-7664 deletion completed in 24.186270664s

• [SLOW TEST:61.751 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
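The checks above amount to curling each netserver pod's /hostName endpoint (10.44.0.1 and 10.32.0.4 in this run) from a pod that shares the node's network namespace. A rough sketch of such a host-network helper pod, with an illustrative name and an assumed image tag:

  apiVersion: v1
  kind: Pod
  metadata:
    name: host-test-container-pod
  spec:
    hostNetwork: true                # run in the node's network namespace
    containers:
    - name: hostexec
      image: gcr.io/kubernetes-e2e-test-images/hostexec:1.1   # assumed tag

From inside that pod, curl -s http://<pod-ip>:8080/hostName returns the netserver pod's name, which is what the two ExecWithOptions invocations above are doing; the test passes once every expected endpoint (netserver-0, netserver-1) has answered.
------------------------------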
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:27:46.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-fc521217-a4b6-467c-89ee-f15eb20addb7
STEP: Creating a pod to test consume secrets
Feb 11 14:27:46.617: INFO: Waiting up to 5m0s for pod "pod-secrets-80452933-e1f8-4af9-90de-a0362a02a80a" in namespace "secrets-1607" to be "success or failure"
Feb 11 14:27:46.669: INFO: Pod "pod-secrets-80452933-e1f8-4af9-90de-a0362a02a80a": Phase="Pending", Reason="", readiness=false. Elapsed: 51.792935ms
Feb 11 14:27:48.691: INFO: Pod "pod-secrets-80452933-e1f8-4af9-90de-a0362a02a80a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073718272s
Feb 11 14:27:50.703: INFO: Pod "pod-secrets-80452933-e1f8-4af9-90de-a0362a02a80a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08629318s
Feb 11 14:27:52.712: INFO: Pod "pod-secrets-80452933-e1f8-4af9-90de-a0362a02a80a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09514947s
Feb 11 14:27:54.725: INFO: Pod "pod-secrets-80452933-e1f8-4af9-90de-a0362a02a80a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.108564515s
STEP: Saw pod success
Feb 11 14:27:54.726: INFO: Pod "pod-secrets-80452933-e1f8-4af9-90de-a0362a02a80a" satisfied condition "success or failure"
Feb 11 14:27:54.730: INFO: Trying to get logs from node iruya-node pod pod-secrets-80452933-e1f8-4af9-90de-a0362a02a80a container secret-volume-test: 
STEP: delete the pod
Feb 11 14:27:54.847: INFO: Waiting for pod pod-secrets-80452933-e1f8-4af9-90de-a0362a02a80a to disappear
Feb 11 14:27:54.878: INFO: Pod pod-secrets-80452933-e1f8-4af9-90de-a0362a02a80a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:27:54.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1607" for this suite.
Feb 11 14:28:01.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:28:01.182: INFO: namespace secrets-1607 deletion completed in 6.281216999s

• [SLOW TEST:14.772 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
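As a sketch, the pod under test mounts the secret with an explicit defaultMode while running as a non-root user with an fsGroup set; the uid, gid, and mode values here are illustrative, while the secret name comes from the log:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-example        # illustrative name
  spec:
    securityContext:
      runAsUser: 1000                # illustrative non-root uid
      fsGroup: 1001                  # illustrative gid applied to volume files
    containers:
    - name: secret-volume-test
      image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-fc521217-a4b6-467c-89ee-f15eb20addb7
        defaultMode: 0440            # octal 0440 == decimal 288

The test reads the mounted file back and asserts that its content, mode, and group ownership reflect defaultMode and fsGroup.
------------------------------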
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:28:01.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-9a4aca3e-f31e-42b3-8841-369ab0d21210
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:28:13.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6912" for this suite.
Feb 11 14:28:35.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:28:36.092: INFO: namespace configmap-6912 deletion completed in 22.154130461s

• [SLOW TEST:34.908 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
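binaryData sits alongside data in the same ConfigMap object and carries base64-encoded bytes; a minimal illustration (keys and values are made up, the ConfigMap name comes from the log):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-test-upd-9a4aca3e-f31e-42b3-8841-369ab0d21210
  data:
    text-key: some plain text        # illustrative
  binaryData:
    binary-key: AQIDBAUGBwgJCg==     # base64 for bytes 0x01..0x0a, illustrative

Both kinds of keys surface as files when the ConfigMap is mounted as a volume, which is what the two "Waiting for pod with ... data" steps verify.
------------------------------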
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:28:36.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with the 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Feb 11 14:28:45.274: INFO: Pod name pod-adoption-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:28:46.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2376" for this suite.
Feb 11 14:29:10.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:29:10.495: INFO: namespace replicaset-2376 deletion completed in 24.167342076s

• [SLOW TEST:34.403 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
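The adoption/release behaviour rests entirely on label selection. First a bare pod labeled name=pod-adoption-release is created, then a ReplicaSet whose selector matches it; a sketch with an illustrative image:

  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: pod-adoption-release
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: pod-adoption-release
    template:
      metadata:
        labels:
          name: pod-adoption-release
      spec:
        containers:
        - name: app                                    # illustrative
          image: docker.io/library/nginx:1.14-alpine   # illustrative image

Because the orphan pod already satisfies the selector, the controller adopts it rather than creating a new one; changing that pod's 'name' label afterwards drops it out of the selector, so the controller releases it (removes its ownerReference) and creates a replacement, which is why exactly 1 pod is still found above.
------------------------------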
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:29:10.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 14:29:10.648: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c169b52-bff3-481b-881e-57231c23c4cd" in namespace "projected-1443" to be "success or failure"
Feb 11 14:29:10.659: INFO: Pod "downwardapi-volume-0c169b52-bff3-481b-881e-57231c23c4cd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.183854ms
Feb 11 14:29:12.676: INFO: Pod "downwardapi-volume-0c169b52-bff3-481b-881e-57231c23c4cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027295401s
Feb 11 14:29:14.687: INFO: Pod "downwardapi-volume-0c169b52-bff3-481b-881e-57231c23c4cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038163526s
Feb 11 14:29:16.712: INFO: Pod "downwardapi-volume-0c169b52-bff3-481b-881e-57231c23c4cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063165834s
Feb 11 14:29:18.728: INFO: Pod "downwardapi-volume-0c169b52-bff3-481b-881e-57231c23c4cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079559087s
Feb 11 14:29:20.745: INFO: Pod "downwardapi-volume-0c169b52-bff3-481b-881e-57231c23c4cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095908848s
STEP: Saw pod success
Feb 11 14:29:20.745: INFO: Pod "downwardapi-volume-0c169b52-bff3-481b-881e-57231c23c4cd" satisfied condition "success or failure"
Feb 11 14:29:20.750: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0c169b52-bff3-481b-881e-57231c23c4cd container client-container: 
STEP: delete the pod
Feb 11 14:29:20.812: INFO: Waiting for pod downwardapi-volume-0c169b52-bff3-481b-881e-57231c23c4cd to disappear
Feb 11 14:29:20.826: INFO: Pod downwardapi-volume-0c169b52-bff3-481b-881e-57231c23c4cd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:29:20.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1443" for this suite.
Feb 11 14:29:26.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:29:27.064: INFO: namespace projected-1443 deletion completed in 6.229958655s

• [SLOW TEST:16.567 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
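This is the mirror image of the default-memory-limit case earlier: here the container does declare a memory request and the projected volume exposes requests.memory. A sketch with illustrative names and an assumed request size:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-request-example   # illustrative name
  spec:
    containers:
    - name: client-container
      image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
      resources:
        requests:
          memory: 32Mi               # illustrative; the test asserts the reported value
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_request
              resourceFieldRef:
                containerName: client-container
                resource: requests.memory
------------------------------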
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:29:27.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 11 14:29:27.161: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5034,SelfLink:/api/v1/namespaces/watch-5034/configmaps/e2e-watch-test-watch-closed,UID:8373d2c4-a178-446f-a16f-5610650137c9,ResourceVersion:23957611,Generation:0,CreationTimestamp:2020-02-11 14:29:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 11 14:29:27.162: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5034,SelfLink:/api/v1/namespaces/watch-5034/configmaps/e2e-watch-test-watch-closed,UID:8373d2c4-a178-446f-a16f-5610650137c9,ResourceVersion:23957612,Generation:0,CreationTimestamp:2020-02-11 14:29:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 11 14:29:27.241: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5034,SelfLink:/api/v1/namespaces/watch-5034/configmaps/e2e-watch-test-watch-closed,UID:8373d2c4-a178-446f-a16f-5610650137c9,ResourceVersion:23957613,Generation:0,CreationTimestamp:2020-02-11 14:29:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 11 14:29:27.241: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5034,SelfLink:/api/v1/namespaces/watch-5034/configmaps/e2e-watch-test-watch-closed,UID:8373d2c4-a178-446f-a16f-5610650137c9,ResourceVersion:23957614,Generation:0,CreationTimestamp:2020-02-11 14:29:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:29:27.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5034" for this suite.
Feb 11 14:29:33.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:29:33.414: INFO: namespace watch-5034 deletion completed in 6.167056376s

• [SLOW TEST:6.350 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
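The mechanism under test: a watch can be resumed from the last resourceVersion the previous watch delivered, so no intervening events are lost. In this run the first watch closed after observing resourceVersion 23957612, and the second watch is opened with roughly this underlying API call (sketch):

  GET /api/v1/namespaces/watch-5034/configmaps?watch=true&resourceVersion=23957612

The server replays everything after that version, which is why the MODIFIED (mutation: 2) and DELETED events at resource versions 23957613 and 23957614 arrive on the new watch even though it was created after they happened.
------------------------------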
SSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:29:33.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-5048/configmap-test-27b3e93f-fa1d-45fc-8514-3ef100713fd1
STEP: Creating a pod to test consume configMaps
Feb 11 14:29:33.678: INFO: Waiting up to 5m0s for pod "pod-configmaps-8a82d7af-71e9-4695-a06d-6b1c9691d353" in namespace "configmap-5048" to be "success or failure"
Feb 11 14:29:33.685: INFO: Pod "pod-configmaps-8a82d7af-71e9-4695-a06d-6b1c9691d353": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167206ms
Feb 11 14:29:35.722: INFO: Pod "pod-configmaps-8a82d7af-71e9-4695-a06d-6b1c9691d353": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043501082s
Feb 11 14:29:37.733: INFO: Pod "pod-configmaps-8a82d7af-71e9-4695-a06d-6b1c9691d353": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053707058s
Feb 11 14:29:39.743: INFO: Pod "pod-configmaps-8a82d7af-71e9-4695-a06d-6b1c9691d353": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063964892s
Feb 11 14:29:41.810: INFO: Pod "pod-configmaps-8a82d7af-71e9-4695-a06d-6b1c9691d353": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.131494037s
STEP: Saw pod success
Feb 11 14:29:41.811: INFO: Pod "pod-configmaps-8a82d7af-71e9-4695-a06d-6b1c9691d353" satisfied condition "success or failure"
Feb 11 14:29:41.826: INFO: Trying to get logs from node iruya-node pod pod-configmaps-8a82d7af-71e9-4695-a06d-6b1c9691d353 container env-test: 
STEP: delete the pod
Feb 11 14:29:41.955: INFO: Waiting for pod pod-configmaps-8a82d7af-71e9-4695-a06d-6b1c9691d353 to disappear
Feb 11 14:29:41.965: INFO: Pod pod-configmaps-8a82d7af-71e9-4695-a06d-6b1c9691d353 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:29:41.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5048" for this suite.
Feb 11 14:29:48.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:29:48.263: INFO: namespace configmap-5048 deletion completed in 6.288820329s

• [SLOW TEST:14.848 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
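Consuming a ConfigMap "via the environment" means wiring individual keys into environment variables with configMapKeyRef. A sketch using the ConfigMap and container names from the log (the key and variable names are illustrative, and busybox is an assumed image):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-example     # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox                 # assumed; the container only prints its environment
      command: ["sh", "-c", "env"]
      env:
      - name: CONFIG_DATA_1          # illustrative variable name
        valueFrom:
          configMapKeyRef:
            name: configmap-test-27b3e93f-fa1d-45fc-8514-3ef100713fd1
            key: data-1              # illustrative key

The pod runs to completion and the test inspects its logs for the expected variable, hence the Succeeded phase and the "Saw pod success" step above.
------------------------------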
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:29:48.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 14:29:48.460: INFO: Creating ReplicaSet my-hostname-basic-a7e0f056-27b3-4fb5-95e9-02a662637832
Feb 11 14:29:48.610: INFO: Pod name my-hostname-basic-a7e0f056-27b3-4fb5-95e9-02a662637832: Found 0 pods out of 1
Feb 11 14:29:53.750: INFO: Pod name my-hostname-basic-a7e0f056-27b3-4fb5-95e9-02a662637832: Found 1 pod out of 1
Feb 11 14:29:53.751: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a7e0f056-27b3-4fb5-95e9-02a662637832" is running
Feb 11 14:29:55.779: INFO: Pod "my-hostname-basic-a7e0f056-27b3-4fb5-95e9-02a662637832-24qsp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 14:29:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 14:29:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a7e0f056-27b3-4fb5-95e9-02a662637832]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 14:29:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a7e0f056-27b3-4fb5-95e9-02a662637832]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 14:29:48 +0000 UTC Reason: Message:}])
Feb 11 14:29:55.779: INFO: Trying to dial the pod
Feb 11 14:30:00.826: INFO: Controller my-hostname-basic-a7e0f056-27b3-4fb5-95e9-02a662637832: Got expected result from replica 1 [my-hostname-basic-a7e0f056-27b3-4fb5-95e9-02a662637832-24qsp]: "my-hostname-basic-a7e0f056-27b3-4fb5-95e9-02a662637832-24qsp", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:30:00.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5544" for this suite.
Feb 11 14:30:08.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:30:09.021: INFO: namespace replicaset-5544 deletion completed in 8.187950794s

• [SLOW TEST:20.758 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
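The "basic image" here is one that serves its own pod name over HTTP; the test dials each replica and expects its hostname back. A sketch, with an assumed image, tag, and port:

  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: my-hostname-basic-a7e0f056-27b3-4fb5-95e9-02a662637832
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: my-hostname-basic-a7e0f056-27b3-4fb5-95e9-02a662637832
    template:
      metadata:
        labels:
          name: my-hostname-basic-a7e0f056-27b3-4fb5-95e9-02a662637832
      spec:
        containers:
        - name: my-hostname-basic
          image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed image/tag
          ports:
          - containerPort: 9376      # assumed serve-hostname port

"Got expected result from replica 1" above is the reply "my-hostname-basic-a7e0f056-27b3-4fb5-95e9-02a662637832-24qsp", i.e. the serving pod's own name.
------------------------------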
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:30:09.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 14:30:09.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:30:19.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3899" for this suite.
Feb 11 14:31:11.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:31:12.099: INFO: namespace pods-3899 deletion completed in 52.254746199s

• [SLOW TEST:63.075 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
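Rather than SPDY, this test drives the same exec subresource over a websocket upgrade. Schematically (the pod name is not printed in the log, so a placeholder stands in; the command parameters are illustrative):

  GET /api/v1/namespaces/pods-3899/pods/<pod-name>/exec?command=cat&command=/etc/resolv.conf&stdout=1&stderr=1
  Upgrade: websocket
  Sec-WebSocket-Protocol: base64.channel.k8s.io

With the channel.k8s.io family of subprotocols, each frame is prefixed with a channel byte (0 = stdin, 1 = stdout, 2 = stderr), so a single connection multiplexes all three streams.
------------------------------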
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:31:12.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5635
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5635
STEP: Creating statefulset with conflicting port in namespace statefulset-5635
STEP: Waiting until pod test-pod starts running in namespace statefulset-5635
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-5635
Feb 11 14:31:24.317: INFO: Observed stateful pod in namespace: statefulset-5635, name: ss-0, uid: e2cf8911-04a5-4068-a072-055b7da1bf43, status phase: Pending. Waiting for statefulset controller to delete.
Feb 11 14:31:26.497: INFO: Observed stateful pod in namespace: statefulset-5635, name: ss-0, uid: e2cf8911-04a5-4068-a072-055b7da1bf43, status phase: Failed. Waiting for statefulset controller to delete.
Feb 11 14:31:26.514: INFO: Observed stateful pod in namespace: statefulset-5635, name: ss-0, uid: e2cf8911-04a5-4068-a072-055b7da1bf43, status phase: Failed. Waiting for statefulset controller to delete.
Feb 11 14:31:26.539: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5635
STEP: Removing pod with conflicting port in namespace statefulset-5635
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-5635 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 11 14:31:38.727: INFO: Deleting all statefulset in ns statefulset-5635
Feb 11 14:31:38.734: INFO: Scaling statefulset ss to 0
Feb 11 14:31:48.774: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 14:31:48.784: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:31:48.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5635" for this suite.
Feb 11 14:31:56.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:31:57.059: INFO: namespace statefulset-5635 deletion completed in 8.233632279s

• [SLOW TEST:44.959 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
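The setup: a plain pod and a one-replica StatefulSet both demand the same hostPort on the same node, so ss-0 cannot start there (the Failed phases above) and the controller keeps deleting and recreating it until the conflicting pod goes away. A sketch of the StatefulSet side; serviceName "test" and the node name appear in the log, while the labels, image, and port number are illustrative:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss
  spec:
    serviceName: test
    replicas: 1
    selector:
      matchLabels:
        app: ss                      # illustrative label
    template:
      metadata:
        labels:
          app: ss
      spec:
        nodeName: iruya-node         # pin to the same node as the conflicting pod
        containers:
        - name: nginx
          image: docker.io/library/nginx:1.14-alpine   # illustrative image
          ports:
          - containerPort: 80
            hostPort: 21017          # illustrative; must collide with test-pod's hostPort

Once the conflicting pod is removed, the next recreation of ss-0 can bind the host port and reach Running, which is what the final wait asserts.
------------------------------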
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:31:57.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 14:31:57.137: INFO: Waiting up to 5m0s for pod "downwardapi-volume-79c75745-a2dd-4ea0-a984-5a8f1bff9d12" in namespace "downward-api-281" to be "success or failure"
Feb 11 14:31:57.189: INFO: Pod "downwardapi-volume-79c75745-a2dd-4ea0-a984-5a8f1bff9d12": Phase="Pending", Reason="", readiness=false. Elapsed: 52.718523ms
Feb 11 14:31:59.315: INFO: Pod "downwardapi-volume-79c75745-a2dd-4ea0-a984-5a8f1bff9d12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177830824s
Feb 11 14:32:01.329: INFO: Pod "downwardapi-volume-79c75745-a2dd-4ea0-a984-5a8f1bff9d12": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192648388s
Feb 11 14:32:03.349: INFO: Pod "downwardapi-volume-79c75745-a2dd-4ea0-a984-5a8f1bff9d12": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212747488s
Feb 11 14:32:05.364: INFO: Pod "downwardapi-volume-79c75745-a2dd-4ea0-a984-5a8f1bff9d12": Phase="Pending", Reason="", readiness=false. Elapsed: 8.227466507s
Feb 11 14:32:07.374: INFO: Pod "downwardapi-volume-79c75745-a2dd-4ea0-a984-5a8f1bff9d12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.237433896s
STEP: Saw pod success
Feb 11 14:32:07.374: INFO: Pod "downwardapi-volume-79c75745-a2dd-4ea0-a984-5a8f1bff9d12" satisfied condition "success or failure"
Feb 11 14:32:07.379: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-79c75745-a2dd-4ea0-a984-5a8f1bff9d12 container client-container: 
STEP: delete the pod
Feb 11 14:32:07.477: INFO: Waiting for pod downwardapi-volume-79c75745-a2dd-4ea0-a984-5a8f1bff9d12 to disappear
Feb 11 14:32:07.485: INFO: Pod downwardapi-volume-79c75745-a2dd-4ea0-a984-5a8f1bff9d12 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:32:07.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-281" for this suite.
Feb 11 14:32:13.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:32:13.677: INFO: namespace downward-api-281 deletion completed in 6.183040105s

• [SLOW TEST:16.618 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
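Same assertion as the projected variant earlier, but through the plain downwardAPI volume type instead of a projected source. A sketch (names and request size illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-plain-example   # illustrative name
  spec:
    containers:
    - name: client-container
      image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
      resources:
        requests:
          memory: 32Mi               # illustrative
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:                   # plain volume type, not projected.sources
        items:
        - path: memory_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.memory
------------------------------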
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:32:13.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:32:21.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5077" for this suite.
Feb 11 14:33:08.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:33:08.172: INFO: namespace kubelet-test-5077 deletion completed in 46.173807881s

• [SLOW TEST:54.493 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
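The pod here is just a busybox container whose command writes a line to stdout, which the kubelet captures as the container's logs. A sketch with an illustrative name and message:

  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-scheduling-example # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "echo 'Hello from the busybox command'"]

kubectl logs busybox-scheduling-example then prints exactly that line, which is the assertion the test makes.
------------------------------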
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:33:08.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-68cf7b07-14a5-4b4c-85d0-2408c8127b56 in namespace container-probe-8023
Feb 11 14:33:16.351: INFO: Started pod busybox-68cf7b07-14a5-4b4c-85d0-2408c8127b56 in namespace container-probe-8023
STEP: checking the pod's current state and verifying that restartCount is present
Feb 11 14:33:16.357: INFO: Initial restart count of pod busybox-68cf7b07-14a5-4b4c-85d0-2408c8127b56 is 0
Feb 11 14:34:08.790: INFO: Restart count of pod container-probe-8023/busybox-68cf7b07-14a5-4b4c-85d0-2408c8127b56 is now 1 (52.432292329s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:34:08.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8023" for this suite.
Feb 11 14:34:15.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:34:15.332: INFO: namespace container-probe-8023 deletion completed in 6.496843655s

• [SLOW TEST:67.159 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
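The restart after ~52s is the expected behaviour of an exec liveness probe pointed at a file the container deletes partway through its life. A sketch; the timings here are illustrative but produce the same failure pattern:

  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-liveness-example   # illustrative name
  spec:
    containers:
    - name: busybox
      image: busybox
      args:
      - /bin/sh
      - -c
      - touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 15      # illustrative
        periodSeconds: 5             # illustrative
        failureThreshold: 1          # illustrative

Once /tmp/health disappears, cat exits non-zero, the probe fails, and the kubelet restarts the container, bumping restartCount from 0 to 1 as logged above.
------------------------------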
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:34:15.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 14:34:15.403: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 11 14:34:15.504: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 11 14:34:20.522: INFO: Pod name sample-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Feb 11 14:34:24.565: INFO: Creating deployment "test-rolling-update-deployment"
Feb 11 14:34:24.592: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision after the one held by the adopted replica set "test-rolling-update-controller"
Feb 11 14:34:24.604: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 11 14:34:26.616: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Feb 11 14:34:26.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717028464, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717028464, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717028464, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717028464, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 14:34:28.641: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717028464, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717028464, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717028464, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717028464, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 14:34:30.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717028464, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717028464, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717028464, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717028464, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 14:34:32.641: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 11 14:34:32.675: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-9326,SelfLink:/apis/apps/v1/namespaces/deployment-9326/deployments/test-rolling-update-deployment,UID:0f9869da-f9bf-4242-84d4-2a183a1c4bb8,ResourceVersion:23958416,Generation:1,CreationTimestamp:2020-02-11 14:34:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-11 14:34:24 +0000 UTC 2020-02-11 14:34:24 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-11 14:34:31 +0000 UTC 2020-02-11 14:34:24 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 11 14:34:32.681: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-9326,SelfLink:/apis/apps/v1/namespaces/deployment-9326/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:156254fe-0987-4ecb-a608-3e1db66a2e3a,ResourceVersion:23958405,Generation:1,CreationTimestamp:2020-02-11 14:34:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 0f9869da-f9bf-4242-84d4-2a183a1c4bb8 0xc000520df7 0xc000520df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 11 14:34:32.681: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 11 14:34:32.682: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-9326,SelfLink:/apis/apps/v1/namespaces/deployment-9326/replicasets/test-rolling-update-controller,UID:36fa56cb-7ca0-4bef-91ef-c878a418f360,ResourceVersion:23958415,Generation:2,CreationTimestamp:2020-02-11 14:34:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 0f9869da-f9bf-4242-84d4-2a183a1c4bb8 0xc0005205c7 0xc0005205c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 11 14:34:32.687: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-rs9w8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-rs9w8,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-9326,SelfLink:/api/v1/namespaces/deployment-9326/pods/test-rolling-update-deployment-79f6b9d75c-rs9w8,UID:0839606d-3033-4bdb-ae77-7695c3998ad9,ResourceVersion:23958404,Generation:0,CreationTimestamp:2020-02-11 14:34:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 156254fe-0987-4ecb-a608-3e1db66a2e3a 0xc0033317d7 0xc0033317d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4d9jk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4d9jk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-4d9jk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003331850} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003331870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:34:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:34:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:34:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:34:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-11 14:34:24 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-11 14:34:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://6a8f745e0e30106d97bd6499c620018f9a84ecc671770bbe1949970184e05925}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:34:32.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9326" for this suite.
Feb 11 14:34:38.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:34:38.865: INFO: namespace deployment-9326 deletion completed in 6.173042983s

• [SLOW TEST:23.532 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
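
Note on the spec above: the object dumps show the rollout shape directly — the old nginx-based ReplicaSet (test-rolling-update-controller) ends at Replicas:*0 while the new redis-based pod is Running, and the deployment.kubernetes.io/desired-replicas and max-replicas annotations are how the controller tracks the surge. A minimal sketch of the post-rollout Deployment; names, labels, and the redis image are taken from the dumps above, the strategy block is an assumption (RollingUpdate is also the default):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
  namespace: deployment-9326
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate        # replace old ReplicaSet pods incrementally, never all at once
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
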
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:34:38.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 14:34:39.068: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf165a9b-9d90-492a-bcb7-18b346af1d1b" in namespace "downward-api-5269" to be "success or failure"
Feb 11 14:34:39.082: INFO: Pod "downwardapi-volume-cf165a9b-9d90-492a-bcb7-18b346af1d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.084447ms
Feb 11 14:34:41.094: INFO: Pod "downwardapi-volume-cf165a9b-9d90-492a-bcb7-18b346af1d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025209948s
Feb 11 14:34:43.111: INFO: Pod "downwardapi-volume-cf165a9b-9d90-492a-bcb7-18b346af1d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042349791s
Feb 11 14:34:45.120: INFO: Pod "downwardapi-volume-cf165a9b-9d90-492a-bcb7-18b346af1d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051217139s
Feb 11 14:34:47.127: INFO: Pod "downwardapi-volume-cf165a9b-9d90-492a-bcb7-18b346af1d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058914035s
Feb 11 14:34:49.137: INFO: Pod "downwardapi-volume-cf165a9b-9d90-492a-bcb7-18b346af1d1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068433625s
STEP: Saw pod success
Feb 11 14:34:49.137: INFO: Pod "downwardapi-volume-cf165a9b-9d90-492a-bcb7-18b346af1d1b" satisfied condition "success or failure"
Feb 11 14:34:49.142: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-cf165a9b-9d90-492a-bcb7-18b346af1d1b container client-container: 
STEP: delete the pod
Feb 11 14:34:49.247: INFO: Waiting for pod downwardapi-volume-cf165a9b-9d90-492a-bcb7-18b346af1d1b to disappear
Feb 11 14:34:49.259: INFO: Pod downwardapi-volume-cf165a9b-9d90-492a-bcb7-18b346af1d1b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:34:49.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5269" for this suite.
Feb 11 14:34:55.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:34:55.425: INFO: namespace downward-api-5269 deletion completed in 6.155994861s

• [SLOW TEST:16.559 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
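
The pod this spec creates exposes its own CPU request to itself through a downwardAPI volume; a minimal sketch, assuming a busybox image, mount path, and request value (only the container name client-container comes from the log):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name; the run used a UUID-suffixed one
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # assumed value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                # with this divisor the file contains "250" (millicores)
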
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:34:55.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-21a62d1a-9cf6-4200-af22-180f5688523d
STEP: Creating a pod to test consume secrets
Feb 11 14:34:55.638: INFO: Waiting up to 5m0s for pod "pod-secrets-e628ffe4-03b6-4e9c-a7f3-d1dc275dbb33" in namespace "secrets-7618" to be "success or failure"
Feb 11 14:34:55.655: INFO: Pod "pod-secrets-e628ffe4-03b6-4e9c-a7f3-d1dc275dbb33": Phase="Pending", Reason="", readiness=false. Elapsed: 17.544368ms
Feb 11 14:34:57.668: INFO: Pod "pod-secrets-e628ffe4-03b6-4e9c-a7f3-d1dc275dbb33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030407451s
Feb 11 14:34:59.676: INFO: Pod "pod-secrets-e628ffe4-03b6-4e9c-a7f3-d1dc275dbb33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038622972s
Feb 11 14:35:01.689: INFO: Pod "pod-secrets-e628ffe4-03b6-4e9c-a7f3-d1dc275dbb33": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051089989s
Feb 11 14:35:03.700: INFO: Pod "pod-secrets-e628ffe4-03b6-4e9c-a7f3-d1dc275dbb33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061851628s
STEP: Saw pod success
Feb 11 14:35:03.700: INFO: Pod "pod-secrets-e628ffe4-03b6-4e9c-a7f3-d1dc275dbb33" satisfied condition "success or failure"
Feb 11 14:35:03.704: INFO: Trying to get logs from node iruya-node pod pod-secrets-e628ffe4-03b6-4e9c-a7f3-d1dc275dbb33 container secret-volume-test: 
STEP: delete the pod
Feb 11 14:35:03.887: INFO: Waiting for pod pod-secrets-e628ffe4-03b6-4e9c-a7f3-d1dc275dbb33 to disappear
Feb 11 14:35:03.905: INFO: Pod pod-secrets-e628ffe4-03b6-4e9c-a7f3-d1dc275dbb33 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:35:03.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7618" for this suite.
Feb 11 14:35:09.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:35:10.117: INFO: namespace secrets-7618 deletion completed in 6.202839719s

• [SLOW TEST:14.691 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
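
The secret-volume spec pairs a Secret with a pod that mounts it and reads the key back; a minimal sketch (the container name secret-volume-test is from the log; the key, payload, and paths are assumptions):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-example          # hypothetical; the run used a UUID-suffixed name
data:
  data-1: dmFsdWUtMQ==               # base64("value-1"), an assumed payload
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                   # assumed image
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example
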
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:35:10.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-n4t2
STEP: Creating a pod to test atomic-volume-subpath
Feb 11 14:35:10.318: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-n4t2" in namespace "subpath-4330" to be "success or failure"
Feb 11 14:35:10.336: INFO: Pod "pod-subpath-test-secret-n4t2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.340592ms
Feb 11 14:35:12.354: INFO: Pod "pod-subpath-test-secret-n4t2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036115921s
Feb 11 14:35:14.367: INFO: Pod "pod-subpath-test-secret-n4t2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049000173s
Feb 11 14:35:16.374: INFO: Pod "pod-subpath-test-secret-n4t2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056004577s
Feb 11 14:35:18.387: INFO: Pod "pod-subpath-test-secret-n4t2": Phase="Running", Reason="", readiness=true. Elapsed: 8.068755442s
Feb 11 14:35:20.395: INFO: Pod "pod-subpath-test-secret-n4t2": Phase="Running", Reason="", readiness=true. Elapsed: 10.07712446s
Feb 11 14:35:22.406: INFO: Pod "pod-subpath-test-secret-n4t2": Phase="Running", Reason="", readiness=true. Elapsed: 12.088293184s
Feb 11 14:35:24.415: INFO: Pod "pod-subpath-test-secret-n4t2": Phase="Running", Reason="", readiness=true. Elapsed: 14.09739939s
Feb 11 14:35:26.427: INFO: Pod "pod-subpath-test-secret-n4t2": Phase="Running", Reason="", readiness=true. Elapsed: 16.109514859s
Feb 11 14:35:28.437: INFO: Pod "pod-subpath-test-secret-n4t2": Phase="Running", Reason="", readiness=true. Elapsed: 18.119158919s
Feb 11 14:35:30.457: INFO: Pod "pod-subpath-test-secret-n4t2": Phase="Running", Reason="", readiness=true. Elapsed: 20.139191499s
Feb 11 14:35:32.859: INFO: Pod "pod-subpath-test-secret-n4t2": Phase="Running", Reason="", readiness=true. Elapsed: 22.54067903s
Feb 11 14:35:34.871: INFO: Pod "pod-subpath-test-secret-n4t2": Phase="Running", Reason="", readiness=true. Elapsed: 24.552989809s
Feb 11 14:35:36.881: INFO: Pod "pod-subpath-test-secret-n4t2": Phase="Running", Reason="", readiness=true. Elapsed: 26.563385142s
Feb 11 14:35:38.926: INFO: Pod "pod-subpath-test-secret-n4t2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.608269498s
STEP: Saw pod success
Feb 11 14:35:38.927: INFO: Pod "pod-subpath-test-secret-n4t2" satisfied condition "success or failure"
Feb 11 14:35:38.932: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-n4t2 container test-container-subpath-secret-n4t2: 
STEP: delete the pod
Feb 11 14:35:38.989: INFO: Waiting for pod pod-subpath-test-secret-n4t2 to disappear
Feb 11 14:35:38.993: INFO: Pod pod-subpath-test-secret-n4t2 no longer exists
STEP: Deleting pod pod-subpath-test-secret-n4t2
Feb 11 14:35:38.993: INFO: Deleting pod "pod-subpath-test-secret-n4t2" in namespace "subpath-4330"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:35:39.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4330" for this suite.
Feb 11 14:35:45.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:35:45.293: INFO: namespace subpath-4330 deletion completed in 6.286579792s

• [SLOW TEST:35.175 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
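
subPath mounts a single entry of a volume instead of the whole directory, and the spec above checks that this stays consistent for an atomic-writer volume like a Secret, whose contents are swapped via symlink on update. A sketch; the secret name, key, and paths are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-secret   # name pattern taken from the log
    image: busybox                        # assumed image
    command: ["/bin/sh", "-c", "cat /test-volume/sub"]
    volumeMounts:
    - name: my-secret
      mountPath: /test-volume/sub
      subPath: secret-key                 # mounts only this key's file, not the volume root
  volumes:
  - name: my-secret
    secret:
      secretName: my-secret               # hypothetical secret
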
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:35:45.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 14:35:45.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3958'
Feb 11 14:35:47.931: INFO: stderr: ""
Feb 11 14:35:47.932: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb 11 14:35:47.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3958'
Feb 11 14:35:48.744: INFO: stderr: ""
Feb 11 14:35:48.744: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 11 14:35:49.761: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:35:49.762: INFO: Found 0 / 1
Feb 11 14:35:50.763: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:35:50.764: INFO: Found 0 / 1
Feb 11 14:35:51.756: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:35:51.756: INFO: Found 0 / 1
Feb 11 14:35:52.756: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:35:52.757: INFO: Found 0 / 1
Feb 11 14:35:53.760: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:35:53.761: INFO: Found 0 / 1
Feb 11 14:35:54.800: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:35:54.801: INFO: Found 0 / 1
Feb 11 14:35:55.753: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:35:55.753: INFO: Found 0 / 1
Feb 11 14:35:56.756: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:35:56.757: INFO: Found 1 / 1
Feb 11 14:35:56.757: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 11 14:35:56.769: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:35:56.769: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 11 14:35:56.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-hclwl --namespace=kubectl-3958'
Feb 11 14:35:56.965: INFO: stderr: ""
Feb 11 14:35:56.965: INFO: stdout: "Name:           redis-master-hclwl\nNamespace:      kubectl-3958\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Tue, 11 Feb 2020 14:35:48 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://5b9772e070602c88c638c66f6e7e01d3398f5c423ae14b24ec4a5f9d732e564f\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 11 Feb 2020 14:35:55 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dwpl9 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-dwpl9:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-dwpl9\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  8s    default-scheduler    Successfully assigned kubectl-3958/redis-master-hclwl to iruya-node\n  Normal  Pulled     4s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-node  Started container redis-master\n"
Feb 11 14:35:56.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-3958'
Feb 11 14:35:57.134: INFO: stderr: ""
Feb 11 14:35:57.134: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-3958\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  10s   replication-controller  Created pod: redis-master-hclwl\n"
Feb 11 14:35:57.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-3958'
Feb 11 14:35:57.265: INFO: stderr: ""
Feb 11 14:35:57.265: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-3958\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.101.55.207\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Feb 11 14:35:57.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Feb 11 14:35:57.410: INFO: stderr: ""
Feb 11 14:35:57.410: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Tue, 11 Feb 2020 14:35:14 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Tue, 11 Feb 2020 14:35:14 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Tue, 11 Feb 2020 14:35:14 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Tue, 11 Feb 2020 14:35:14 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         191d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         122d\n  kubectl-3958               redis-master-hclwl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  
ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb 11 14:35:57.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3958'
Feb 11 14:35:57.557: INFO: stderr: ""
Feb 11 14:35:57.557: INFO: stdout: "Name:         kubectl-3958\nLabels:       e2e-framework=kubectl\n              e2e-run=1722da6f-c945-4f1f-94c0-e7d38bbc7010\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:35:57.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3958" for this suite.
Feb 11 14:36:19.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:36:19.753: INFO: namespace kubectl-3958 deletion completed in 22.18931371s

• [SLOW TEST:34.460 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
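
The ReplicationController behind the describe output above can be reconstructed almost entirely from the log, since the image, port, and labels all appear in the stdout dumps; a sketch:

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379

The spec then runs kubectl describe against the pod, rc, service, node, and namespace in turn, exactly the five commands visible in the log, and greps each output for the expected fields.
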
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:36:19.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 11 14:36:28.066: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:36:28.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-850" for this suite.
Feb 11 14:36:34.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:36:34.308: INFO: namespace container-runtime-850 deletion completed in 6.157166121s

• [SLOW TEST:14.555 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
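
The behavior under test: with terminationMessagePolicy: FallbackToLogsOnError, container logs are used as the termination message only when the container fails, so a container that exits 0 without writing /dev/termination-log reports an empty message — hence the "Expected: &{} to match Container's Termination Message:  --" line above. A minimal sketch, assuming a busybox image and pod name:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox                          # assumed image
    command: ["/bin/sh", "-c", "exit 0"]    # succeed without writing a termination message
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
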
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:36:34.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Feb 11 14:36:34.452: INFO: Waiting up to 5m0s for pod "client-containers-767a4e1c-1c35-4a83-9bf2-aa5f25ae5418" in namespace "containers-1293" to be "success or failure"
Feb 11 14:36:34.462: INFO: Pod "client-containers-767a4e1c-1c35-4a83-9bf2-aa5f25ae5418": Phase="Pending", Reason="", readiness=false. Elapsed: 9.506824ms
Feb 11 14:36:36.477: INFO: Pod "client-containers-767a4e1c-1c35-4a83-9bf2-aa5f25ae5418": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024842219s
Feb 11 14:36:38.494: INFO: Pod "client-containers-767a4e1c-1c35-4a83-9bf2-aa5f25ae5418": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04100097s
Feb 11 14:36:40.511: INFO: Pod "client-containers-767a4e1c-1c35-4a83-9bf2-aa5f25ae5418": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058900325s
Feb 11 14:36:42.531: INFO: Pod "client-containers-767a4e1c-1c35-4a83-9bf2-aa5f25ae5418": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07824585s
STEP: Saw pod success
Feb 11 14:36:42.531: INFO: Pod "client-containers-767a4e1c-1c35-4a83-9bf2-aa5f25ae5418" satisfied condition "success or failure"
Feb 11 14:36:42.536: INFO: Trying to get logs from node iruya-node pod client-containers-767a4e1c-1c35-4a83-9bf2-aa5f25ae5418 container test-container: 
STEP: delete the pod
Feb 11 14:36:42.599: INFO: Waiting for pod client-containers-767a4e1c-1c35-4a83-9bf2-aa5f25ae5418 to disappear
Feb 11 14:36:42.665: INFO: Pod client-containers-767a4e1c-1c35-4a83-9bf2-aa5f25ae5418 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:36:42.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1293" for this suite.
Feb 11 14:36:48.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:36:48.807: INFO: namespace containers-1293 deletion completed in 6.132909523s

• [SLOW TEST:14.498 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
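
The mechanism this spec checks: in a pod spec, command overrides the image's ENTRYPOINT and args overrides its CMD. A sketch; the container name test-container is from the log, while the image and command are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29             # assumed image
    command: ["/bin/echo", "entrypoint overridden"]   # replaces the image ENTRYPOINT
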
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:36:48.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 11 14:36:48.910: INFO: Waiting up to 5m0s for pod "pod-163440f2-d4a9-4a40-baed-4d770e80eb0c" in namespace "emptydir-5442" to be "success or failure"
Feb 11 14:36:48.969: INFO: Pod "pod-163440f2-d4a9-4a40-baed-4d770e80eb0c": Phase="Pending", Reason="", readiness=false. Elapsed: 59.464894ms
Feb 11 14:36:50.989: INFO: Pod "pod-163440f2-d4a9-4a40-baed-4d770e80eb0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079280975s
Feb 11 14:36:52.997: INFO: Pod "pod-163440f2-d4a9-4a40-baed-4d770e80eb0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087247317s
Feb 11 14:36:55.018: INFO: Pod "pod-163440f2-d4a9-4a40-baed-4d770e80eb0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108141866s
Feb 11 14:36:57.026: INFO: Pod "pod-163440f2-d4a9-4a40-baed-4d770e80eb0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.115741136s
STEP: Saw pod success
Feb 11 14:36:57.026: INFO: Pod "pod-163440f2-d4a9-4a40-baed-4d770e80eb0c" satisfied condition "success or failure"
Feb 11 14:36:57.029: INFO: Trying to get logs from node iruya-node pod pod-163440f2-d4a9-4a40-baed-4d770e80eb0c container test-container: 
STEP: delete the pod
Feb 11 14:36:57.328: INFO: Waiting for pod pod-163440f2-d4a9-4a40-baed-4d770e80eb0c to disappear
Feb 11 14:36:57.338: INFO: Pod pod-163440f2-d4a9-4a40-baed-4d770e80eb0c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:36:57.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5442" for this suite.
Feb 11 14:37:03.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:37:03.558: INFO: namespace emptydir-5442 deletion completed in 6.199201358s

• [SLOW TEST:14.751 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
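
The (root,0777,tmpfs) triple in the spec name encodes who writes the file, the requested mode, and the emptyDir medium; medium: Memory backs the volume with tmpfs instead of node disk. A sketch with assumed image and paths:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # assumed image
    command: ["/bin/sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs rather than the node's filesystem
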
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:37:03.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 11 14:37:03.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3259'
Feb 11 14:37:03.906: INFO: stderr: ""
Feb 11 14:37:03.906: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb 11 14:37:03.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3259'
Feb 11 14:37:08.777: INFO: stderr: ""
Feb 11 14:37:08.779: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:37:08.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3259" for this suite.
Feb 11 14:37:14.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:37:14.977: INFO: namespace kubectl-3259 deletion completed in 6.174323207s

• [SLOW TEST:11.413 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
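
With --restart=Never the run-pod/v1 generator creates a bare Pod rather than a Deployment or Job; the manifest equivalent of the kubectl run command in the log is roughly the following (the run label is an assumption about what the generator adds):

apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod
spec:
  restartPolicy: Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
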
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:37:14.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-7bac7500-e932-43a0-9b70-77f8136b791e
STEP: Creating a pod to test consume secrets
Feb 11 14:37:15.140: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1d00800c-ff86-4234-9812-09c554eedeaa" in namespace "projected-5912" to be "success or failure"
Feb 11 14:37:15.213: INFO: Pod "pod-projected-secrets-1d00800c-ff86-4234-9812-09c554eedeaa": Phase="Pending", Reason="", readiness=false. Elapsed: 72.282598ms
Feb 11 14:37:17.224: INFO: Pod "pod-projected-secrets-1d00800c-ff86-4234-9812-09c554eedeaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083133364s
Feb 11 14:37:19.233: INFO: Pod "pod-projected-secrets-1d00800c-ff86-4234-9812-09c554eedeaa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092237731s
Feb 11 14:37:21.242: INFO: Pod "pod-projected-secrets-1d00800c-ff86-4234-9812-09c554eedeaa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101738836s
Feb 11 14:37:23.250: INFO: Pod "pod-projected-secrets-1d00800c-ff86-4234-9812-09c554eedeaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.109066544s
STEP: Saw pod success
Feb 11 14:37:23.250: INFO: Pod "pod-projected-secrets-1d00800c-ff86-4234-9812-09c554eedeaa" satisfied condition "success or failure"
Feb 11 14:37:23.253: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1d00800c-ff86-4234-9812-09c554eedeaa container projected-secret-volume-test: 
STEP: delete the pod
Feb 11 14:37:23.366: INFO: Waiting for pod pod-projected-secrets-1d00800c-ff86-4234-9812-09c554eedeaa to disappear
Feb 11 14:37:23.379: INFO: Pod pod-projected-secrets-1d00800c-ff86-4234-9812-09c554eedeaa no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:37:23.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5912" for this suite.
Feb 11 14:37:29.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:37:29.547: INFO: namespace projected-5912 deletion completed in 6.160226907s

• [SLOW TEST:14.570 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
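
A projected volume differs from a plain secret volume in that it can merge several sources (secrets, configMaps, downwardAPI, serviceAccountToken) under one mount point; this spec exercises the single-secret case. A sketch with assumed key and paths (the container name projected-secret-volume-test is from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                       # assumed image
    command: ["/bin/sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-example   # hypothetical; the run used a UUID-suffixed name
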
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:37:29.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 11 14:37:29.651: INFO: namespace kubectl-1468
Feb 11 14:37:29.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1468'
Feb 11 14:37:30.171: INFO: stderr: ""
Feb 11 14:37:30.172: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 11 14:37:31.183: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:37:31.184: INFO: Found 0 / 1
Feb 11 14:37:32.953: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:37:32.954: INFO: Found 0 / 1
Feb 11 14:37:33.182: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:37:33.182: INFO: Found 0 / 1
Feb 11 14:37:34.195: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:37:34.196: INFO: Found 0 / 1
Feb 11 14:37:35.179: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:37:35.179: INFO: Found 0 / 1
Feb 11 14:37:36.187: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:37:36.188: INFO: Found 0 / 1
Feb 11 14:37:37.185: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:37:37.185: INFO: Found 0 / 1
Feb 11 14:37:38.190: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:37:38.191: INFO: Found 0 / 1
Feb 11 14:37:39.185: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:37:39.185: INFO: Found 1 / 1
Feb 11 14:37:39.185: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 11 14:37:39.191: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:37:39.191: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 11 14:37:39.191: INFO: wait on redis-master startup in kubectl-1468 
Feb 11 14:37:39.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tbbk9 redis-master --namespace=kubectl-1468'
Feb 11 14:37:39.487: INFO: stderr: ""
Feb 11 14:37:39.487: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 11 Feb 14:37:37.811 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Feb 14:37:37.811 # Server started, Redis version 3.2.12\n1:M 11 Feb 14:37:37.812 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Feb 14:37:37.812 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 11 14:37:39.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1468'
Feb 11 14:37:39.737: INFO: stderr: ""
Feb 11 14:37:39.737: INFO: stdout: "service/rm2 exposed\n"
Feb 11 14:37:39.743: INFO: Service rm2 in namespace kubectl-1468 found.
STEP: exposing service
Feb 11 14:37:41.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1468'
Feb 11 14:37:42.115: INFO: stderr: ""
Feb 11 14:37:42.115: INFO: stdout: "service/rm3 exposed\n"
Feb 11 14:37:42.177: INFO: Service rm3 in namespace kubectl-1468 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:37:44.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1468" for this suite.
Feb 11 14:38:06.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:38:06.355: INFO: namespace kubectl-1468 deletion completed in 22.150359644s

• [SLOW TEST:36.807 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
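
kubectl expose synthesizes a Service whose selector copies the target's labels; the rm2 command in the log is roughly equivalent to applying the following (selector labels assumed from the app=redis,role=master convention used by the RCs earlier in this run):

apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis
    role: master
  ports:
  - port: 1234
    targetPort: 6379

Exposing rm2 as rm3 then just re-wraps the same selector on port 2345, which is why the spec can verify both services against one redis pod.
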
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:38:06.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-6d2d55e1-ce0f-4cd2-a7b4-5b7df0fdfaf9
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-6d2d55e1-ce0f-4cd2-a7b4-5b7df0fdfaf9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:38:18.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9716" for this suite.
Feb 11 14:38:40.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:38:40.904: INFO: namespace configmap-9716 deletion completed in 22.186408415s

• [SLOW TEST:34.549 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
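
ConfigMap volumes are updated in place: the kubelet periodically re-syncs the projected contents, and the atomic writer swaps a symlink so consumers see the old or new data, never a torn mix — which is what the "waiting to observe update in volume" step polls for. A sketch of the pod shape, with names and paths assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  containers:
  - name: configmap-volume-test
    image: busybox                        # assumed image
    command: ["/bin/sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 1; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd-example    # hypothetical; the run used a UUID-suffixed name
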
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:38:40.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 11 14:38:58.169: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 14:38:58.183: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 14:39:00.184: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 14:39:00.196: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 14:39:02.184: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 14:39:02.196: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 14:39:04.184: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 14:39:04.191: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 14:39:06.184: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 14:39:06.196: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 14:39:08.184: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 14:39:08.197: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:39:08.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9064" for this suite.
Feb 11 14:39:30.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:39:30.401: INFO: namespace container-lifecycle-hook-9064 deletion completed in 22.171730162s

• [SLOW TEST:49.497 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
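
A preStop httpGet hook is fired by the kubelet when the pod enters termination, before the container receives SIGTERM; the spec then checks that the handler pod created in BeforeEach actually received the request. A sketch of the hooked container; the path, port, and target host are assumptions (only the pod name comes from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: docker.io/library/nginx:1.14-alpine   # assumed image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop    # assumed handler endpoint
          port: 8080                 # assumed handler port
          host: 10.44.0.3            # assumed IP of the handler pod
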
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:39:30.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 14:39:30.643: INFO: Waiting up to 5m0s for pod "downwardapi-volume-365b9b26-d0a2-4aa7-aa99-affa1b8e181c" in namespace "downward-api-4643" to be "success or failure"
Feb 11 14:39:30.651: INFO: Pod "downwardapi-volume-365b9b26-d0a2-4aa7-aa99-affa1b8e181c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.141085ms
Feb 11 14:39:32.667: INFO: Pod "downwardapi-volume-365b9b26-d0a2-4aa7-aa99-affa1b8e181c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023112208s
Feb 11 14:39:34.694: INFO: Pod "downwardapi-volume-365b9b26-d0a2-4aa7-aa99-affa1b8e181c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050786219s
Feb 11 14:39:36.705: INFO: Pod "downwardapi-volume-365b9b26-d0a2-4aa7-aa99-affa1b8e181c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061761924s
Feb 11 14:39:38.718: INFO: Pod "downwardapi-volume-365b9b26-d0a2-4aa7-aa99-affa1b8e181c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074939337s
STEP: Saw pod success
Feb 11 14:39:38.719: INFO: Pod "downwardapi-volume-365b9b26-d0a2-4aa7-aa99-affa1b8e181c" satisfied condition "success or failure"
Feb 11 14:39:38.722: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-365b9b26-d0a2-4aa7-aa99-affa1b8e181c container client-container: 
STEP: delete the pod
Feb 11 14:39:38.825: INFO: Waiting for pod downwardapi-volume-365b9b26-d0a2-4aa7-aa99-affa1b8e181c to disappear
Feb 11 14:39:38.833: INFO: Pod downwardapi-volume-365b9b26-d0a2-4aa7-aa99-affa1b8e181c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:39:38.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4643" for this suite.
Feb 11 14:39:44.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:39:45.196: INFO: namespace downward-api-4643 deletion completed in 6.352062528s

• [SLOW TEST:14.792 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
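
Per-item mode overrides the volume-wide defaultMode for just that file, which is the single behavior this spec asserts. A downwardAPI sketch; the projected field, mode value, and image are assumptions (the container name client-container is from the log):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-mode-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                   # per-file mode, overriding the volume default
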
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:39:45.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb 11 14:39:53.403: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb 11 14:40:08.596: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:40:08.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2795" for this suite.
Feb 11 14:40:14.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:40:14.858: INFO: namespace pods-2795 deletion completed in 6.235690239s

• [SLOW TEST:29.662 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
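
Graceful deletion sets deletionTimestamp and deletionGracePeriodSeconds on the object and leaves it visible until the kubelet confirms shutdown; the "no pod exists with the name we were looking for" line above is that confirmation arriving. The relevant knob in the pod spec, with an assumed name and image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-example    # hypothetical name
spec:
  terminationGracePeriodSeconds: 30  # how long the kubelet waits between SIGTERM and SIGKILL
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine   # assumed image
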
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:40:14.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 14:40:15.006: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea2b9252-8ea9-417b-838b-2b6b3f885590" in namespace "projected-3666" to be "success or failure"
Feb 11 14:40:15.016: INFO: Pod "downwardapi-volume-ea2b9252-8ea9-417b-838b-2b6b3f885590": Phase="Pending", Reason="", readiness=false. Elapsed: 10.183821ms
Feb 11 14:40:17.026: INFO: Pod "downwardapi-volume-ea2b9252-8ea9-417b-838b-2b6b3f885590": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019831275s
Feb 11 14:40:19.039: INFO: Pod "downwardapi-volume-ea2b9252-8ea9-417b-838b-2b6b3f885590": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033458175s
Feb 11 14:40:21.052: INFO: Pod "downwardapi-volume-ea2b9252-8ea9-417b-838b-2b6b3f885590": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04599944s
Feb 11 14:40:23.097: INFO: Pod "downwardapi-volume-ea2b9252-8ea9-417b-838b-2b6b3f885590": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091220364s
Feb 11 14:40:25.109: INFO: Pod "downwardapi-volume-ea2b9252-8ea9-417b-838b-2b6b3f885590": Phase="Pending", Reason="", readiness=false. Elapsed: 10.102691991s
Feb 11 14:40:27.122: INFO: Pod "downwardapi-volume-ea2b9252-8ea9-417b-838b-2b6b3f885590": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.116378832s
STEP: Saw pod success
Feb 11 14:40:27.122: INFO: Pod "downwardapi-volume-ea2b9252-8ea9-417b-838b-2b6b3f885590" satisfied condition "success or failure"
Feb 11 14:40:27.130: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ea2b9252-8ea9-417b-838b-2b6b3f885590 container client-container: 
STEP: delete the pod
Feb 11 14:40:27.337: INFO: Waiting for pod downwardapi-volume-ea2b9252-8ea9-417b-838b-2b6b3f885590 to disappear
Feb 11 14:40:27.349: INFO: Pod downwardapi-volume-ea2b9252-8ea9-417b-838b-2b6b3f885590 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:40:27.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3666" for this suite.
Feb 11 14:40:33.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:40:33.523: INFO: namespace projected-3666 deletion completed in 6.167181555s

• [SLOW TEST:18.658 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
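The value checked here comes from a projected downwardAPI source with a resourceFieldRef pointing at the container's own limits.memory. A minimal sketch (names illustrative); with a 64Mi limit the mounted file should read 67108864, the limit in bytes:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-memlimit-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
      resources:
        limits:
          memory: 64Mi
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: mem_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory
  EOF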
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:40:33.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 11 14:40:41.173: INFO: 0 pods remaining
Feb 11 14:40:41.173: INFO: 0 pods have nil DeletionTimestamp
Feb 11 14:40:41.173: INFO: 
STEP: Gathering metrics
W0211 14:40:42.203315       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 11 14:40:42.203: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:40:42.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5669" for this suite.
Feb 11 14:40:52.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:40:52.473: INFO: namespace gc-5669 deletion completed in 10.261459571s

• [SLOW TEST:18.949 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
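The deleteOptions behavior being verified is foreground cascading deletion: the owner is kept, carrying a foregroundDeletion finalizer, until the garbage collector has removed its dependents. A sketch of issuing such a delete against the raw API through kubectl proxy (namespace and rc name are illustrative):

  kubectl proxy --port=8001 &
  curl -X DELETE \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
    http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/my-rc
  # the rc stays visible (with metadata.finalizers: [foregroundDeletion])
  # until all of its pods are gone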
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:40:52.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-64441da4-557d-4b1a-8b23-3c5362b73d5b
STEP: Creating a pod to test consume configMaps
Feb 11 14:40:52.617: INFO: Waiting up to 5m0s for pod "pod-configmaps-40c4dcbc-7dc7-47d7-b30c-d88ba95ed4d4" in namespace "configmap-4998" to be "success or failure"
Feb 11 14:40:52.628: INFO: Pod "pod-configmaps-40c4dcbc-7dc7-47d7-b30c-d88ba95ed4d4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.049436ms
Feb 11 14:40:54.644: INFO: Pod "pod-configmaps-40c4dcbc-7dc7-47d7-b30c-d88ba95ed4d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026379783s
Feb 11 14:40:56.654: INFO: Pod "pod-configmaps-40c4dcbc-7dc7-47d7-b30c-d88ba95ed4d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035602538s
Feb 11 14:40:58.675: INFO: Pod "pod-configmaps-40c4dcbc-7dc7-47d7-b30c-d88ba95ed4d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05740132s
Feb 11 14:41:00.691: INFO: Pod "pod-configmaps-40c4dcbc-7dc7-47d7-b30c-d88ba95ed4d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072995278s
STEP: Saw pod success
Feb 11 14:41:00.691: INFO: Pod "pod-configmaps-40c4dcbc-7dc7-47d7-b30c-d88ba95ed4d4" satisfied condition "success or failure"
Feb 11 14:41:00.705: INFO: Trying to get logs from node iruya-node pod pod-configmaps-40c4dcbc-7dc7-47d7-b30c-d88ba95ed4d4 container configmap-volume-test: 
STEP: delete the pod
Feb 11 14:41:00.775: INFO: Waiting for pod pod-configmaps-40c4dcbc-7dc7-47d7-b30c-d88ba95ed4d4 to disappear
Feb 11 14:41:00.828: INFO: Pod pod-configmaps-40c4dcbc-7dc7-47d7-b30c-d88ba95ed4d4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:41:00.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4998" for this suite.
Feb 11 14:41:06.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:41:07.018: INFO: namespace configmap-4998 deletion completed in 6.184261851s

• [SLOW TEST:14.544 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
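Consuming one ConfigMap in multiple volumes boils down to declaring two volumes that reference the same configMap name and mounting both. A minimal sketch with illustrative names:

  kubectl create configmap demo-cm --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-two-volumes-demo
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"]
      volumeMounts:
      - name: cm-vol-1
        mountPath: /etc/cm-1
      - name: cm-vol-2
        mountPath: /etc/cm-2
    volumes:
    - name: cm-vol-1
      configMap:
        name: demo-cm
    - name: cm-vol-2
      configMap:
        name: demo-cm
  EOF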
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:41:07.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 14:41:07.154: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8ca1720-f993-4921-81f5-9c627bdf2f49" in namespace "projected-6496" to be "success or failure"
Feb 11 14:41:07.164: INFO: Pod "downwardapi-volume-e8ca1720-f993-4921-81f5-9c627bdf2f49": Phase="Pending", Reason="", readiness=false. Elapsed: 9.823808ms
Feb 11 14:41:09.182: INFO: Pod "downwardapi-volume-e8ca1720-f993-4921-81f5-9c627bdf2f49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027392567s
Feb 11 14:41:11.191: INFO: Pod "downwardapi-volume-e8ca1720-f993-4921-81f5-9c627bdf2f49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03623422s
Feb 11 14:41:13.217: INFO: Pod "downwardapi-volume-e8ca1720-f993-4921-81f5-9c627bdf2f49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063065801s
Feb 11 14:41:15.226: INFO: Pod "downwardapi-volume-e8ca1720-f993-4921-81f5-9c627bdf2f49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071570891s
STEP: Saw pod success
Feb 11 14:41:15.226: INFO: Pod "downwardapi-volume-e8ca1720-f993-4921-81f5-9c627bdf2f49" satisfied condition "success or failure"
Feb 11 14:41:15.230: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e8ca1720-f993-4921-81f5-9c627bdf2f49 container client-container: 
STEP: delete the pod
Feb 11 14:41:15.290: INFO: Waiting for pod downwardapi-volume-e8ca1720-f993-4921-81f5-9c627bdf2f49 to disappear
Feb 11 14:41:15.301: INFO: Pod downwardapi-volume-e8ca1720-f993-4921-81f5-9c627bdf2f49 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:41:15.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6496" for this suite.
Feb 11 14:41:21.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:41:21.979: INFO: namespace projected-6496 deletion completed in 6.670324856s

• [SLOW TEST:14.960 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
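When the container declares no cpu limit, the downward API falls back to the node's allocatable cpu, so the mounted file should match what the node reports. That fallback value can be read directly (node name taken from the log above):

  kubectl get node iruya-node -o jsonpath='{.status.allocatable.cpu}'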
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:41:21.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 11 14:41:22.151: INFO: Waiting up to 5m0s for pod "pod-1c32708e-b3df-4341-ab90-8e41aa4b5505" in namespace "emptydir-9548" to be "success or failure"
Feb 11 14:41:22.159: INFO: Pod "pod-1c32708e-b3df-4341-ab90-8e41aa4b5505": Phase="Pending", Reason="", readiness=false. Elapsed: 7.073864ms
Feb 11 14:41:24.171: INFO: Pod "pod-1c32708e-b3df-4341-ab90-8e41aa4b5505": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019133763s
Feb 11 14:41:26.177: INFO: Pod "pod-1c32708e-b3df-4341-ab90-8e41aa4b5505": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0253497s
Feb 11 14:41:28.187: INFO: Pod "pod-1c32708e-b3df-4341-ab90-8e41aa4b5505": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035233531s
Feb 11 14:41:30.196: INFO: Pod "pod-1c32708e-b3df-4341-ab90-8e41aa4b5505": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044102418s
Feb 11 14:41:32.204: INFO: Pod "pod-1c32708e-b3df-4341-ab90-8e41aa4b5505": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.051932548s
STEP: Saw pod success
Feb 11 14:41:32.204: INFO: Pod "pod-1c32708e-b3df-4341-ab90-8e41aa4b5505" satisfied condition "success or failure"
Feb 11 14:41:32.207: INFO: Trying to get logs from node iruya-node pod pod-1c32708e-b3df-4341-ab90-8e41aa4b5505 container test-container: 
STEP: delete the pod
Feb 11 14:41:32.289: INFO: Waiting for pod pod-1c32708e-b3df-4341-ab90-8e41aa4b5505 to disappear
Feb 11 14:41:32.298: INFO: Pod pod-1c32708e-b3df-4341-ab90-8e41aa4b5505 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:41:32.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9548" for this suite.
Feb 11 14:41:38.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:41:38.529: INFO: namespace emptydir-9548 deletion completed in 6.223796114s

• [SLOW TEST:16.550 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
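The (non-root,0777,default) case amounts to: run as a non-root UID and write a 0777 file into an emptyDir backed by the default medium (node disk, not memory). A rough sketch with illustrative names and UID:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0777-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001          # any non-root UID
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}             # default medium; set medium: Memory for tmpfs
  EOF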
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:41:38.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Feb 11 14:41:38.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 11 14:41:38.869: INFO: stderr: ""
Feb 11 14:41:38.869: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:41:38.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8316" for this suite.
Feb 11 14:41:44.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:41:45.072: INFO: namespace kubectl-8316 deletion completed in 6.190088621s

• [SLOW TEST:6.542 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
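The same validation can be scripted: grep the api-versions output for an exact v1 line (exit status 0 means the group/version is served):

  kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1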
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:41:45.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 14:41:45.229: INFO: Create a RollingUpdate DaemonSet
Feb 11 14:41:45.234: INFO: Check that daemon pods launch on every node of the cluster
Feb 11 14:41:45.252: INFO: Number of nodes with available pods: 0
Feb 11 14:41:45.252: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:41:46.277: INFO: Number of nodes with available pods: 0
Feb 11 14:41:46.278: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:41:47.266: INFO: Number of nodes with available pods: 0
Feb 11 14:41:47.266: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:41:48.280: INFO: Number of nodes with available pods: 0
Feb 11 14:41:48.280: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:41:49.292: INFO: Number of nodes with available pods: 0
Feb 11 14:41:49.292: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:41:51.199: INFO: Number of nodes with available pods: 0
Feb 11 14:41:51.199: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:41:51.598: INFO: Number of nodes with available pods: 0
Feb 11 14:41:51.599: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:41:52.269: INFO: Number of nodes with available pods: 0
Feb 11 14:41:52.269: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:41:53.314: INFO: Number of nodes with available pods: 0
Feb 11 14:41:53.314: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:41:54.274: INFO: Number of nodes with available pods: 1
Feb 11 14:41:54.274: INFO: Node iruya-node is running more than one daemon pod
Feb 11 14:41:55.291: INFO: Number of nodes with available pods: 2
Feb 11 14:41:55.292: INFO: Number of running nodes: 2, number of available pods: 2
Feb 11 14:41:55.292: INFO: Update the DaemonSet to trigger a rollout
Feb 11 14:41:55.305: INFO: Updating DaemonSet daemon-set
Feb 11 14:42:03.341: INFO: Roll back the DaemonSet before rollout is complete
Feb 11 14:42:03.356: INFO: Updating DaemonSet daemon-set
Feb 11 14:42:03.356: INFO: Make sure DaemonSet rollback is complete
Feb 11 14:42:03.705: INFO: Wrong image for pod: daemon-set-7n8w6. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 11 14:42:03.706: INFO: Pod daemon-set-7n8w6 is not available
Feb 11 14:42:04.731: INFO: Wrong image for pod: daemon-set-7n8w6. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 11 14:42:04.731: INFO: Pod daemon-set-7n8w6 is not available
Feb 11 14:42:05.777: INFO: Wrong image for pod: daemon-set-7n8w6. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 11 14:42:05.777: INFO: Pod daemon-set-7n8w6 is not available
Feb 11 14:42:06.740: INFO: Wrong image for pod: daemon-set-7n8w6. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 11 14:42:06.740: INFO: Pod daemon-set-7n8w6 is not available
Feb 11 14:42:07.733: INFO: Pod daemon-set-lwkbf is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3655, will wait for the garbage collector to delete the pods
Feb 11 14:42:07.907: INFO: Deleting DaemonSet.extensions daemon-set took: 10.996083ms
Feb 11 14:42:08.408: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.775427ms
Feb 11 14:42:15.018: INFO: Number of nodes with available pods: 0
Feb 11 14:42:15.018: INFO: Number of running nodes: 0, number of available pods: 0
Feb 11 14:42:15.034: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3655/daemonsets","resourceVersion":"23959721"},"items":null}

Feb 11 14:42:15.037: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3655/pods","resourceVersion":"23959721"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:42:15.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3655" for this suite.
Feb 11 14:42:21.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:42:21.259: INFO: namespace daemonsets-3655 deletion completed in 6.19875237s

• [SLOW TEST:36.186 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
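The rollout/rollback sequence above maps onto ordinary kubectl rollout commands. A sketch against a RollingUpdate DaemonSet like the one in this spec (the container name app is an assumption; substitute the real one):

  # push a rollout to a bad image, then undo it before it completes
  kubectl -n daemonsets-3655 set image daemonset/daemon-set app=foo:non-existent
  kubectl -n daemonsets-3655 rollout undo daemonset/daemon-set
  kubectl -n daemonsets-3655 rollout status daemonset/daemon-set
  # pods that were already healthy on the old image should not be restarted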
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:42:21.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 11 14:42:39.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 11 14:42:39.557: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 11 14:42:41.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 11 14:42:41.569: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 11 14:42:43.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 11 14:42:43.568: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 11 14:42:45.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 11 14:42:45.569: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 11 14:42:47.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 11 14:42:47.572: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 11 14:42:49.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 11 14:42:49.568: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 11 14:42:51.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 11 14:42:51.576: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 11 14:42:53.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 11 14:42:53.572: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 11 14:42:55.560: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 11 14:42:55.568: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 11 14:42:57.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 11 14:42:57.567: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 11 14:42:59.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 11 14:42:59.580: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 11 14:43:01.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 11 14:43:01.567: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 11 14:43:03.558: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 11 14:43:03.567: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 11 14:43:05.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 11 14:43:05.574: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 11 14:43:07.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 11 14:43:07.566: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:43:07.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4932" for this suite.
Feb 11 14:43:29.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:43:29.909: INFO: namespace container-lifecycle-hook-4932 deletion completed in 22.302750151s

• [SLOW TEST:68.650 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
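A preStop exec hook is declared per container under lifecycle; the kubelet runs it before sending SIGTERM during deletion. In the suite the hook calls back to a separate handler pod, but the field itself looks like this minimal sketch (names and hook command illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-exec-hook
  spec:
    containers:
    - name: main
      image: nginx:1.14-alpine
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "echo prestop ran > /tmp/prestop"]
  EOF
  kubectl delete pod pod-with-prestop-exec-hook   # hook runs before termination proceeds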
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:43:29.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb 11 14:43:30.055: INFO: Waiting up to 5m0s for pod "var-expansion-c72b3f58-d77a-49c3-9e63-0ded80ffd0d8" in namespace "var-expansion-4457" to be "success or failure"
Feb 11 14:43:30.066: INFO: Pod "var-expansion-c72b3f58-d77a-49c3-9e63-0ded80ffd0d8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.531375ms
Feb 11 14:43:32.076: INFO: Pod "var-expansion-c72b3f58-d77a-49c3-9e63-0ded80ffd0d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021053444s
Feb 11 14:43:34.089: INFO: Pod "var-expansion-c72b3f58-d77a-49c3-9e63-0ded80ffd0d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033492415s
Feb 11 14:43:36.095: INFO: Pod "var-expansion-c72b3f58-d77a-49c3-9e63-0ded80ffd0d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039969172s
Feb 11 14:43:38.143: INFO: Pod "var-expansion-c72b3f58-d77a-49c3-9e63-0ded80ffd0d8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08783996s
Feb 11 14:43:40.153: INFO: Pod "var-expansion-c72b3f58-d77a-49c3-9e63-0ded80ffd0d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.098326168s
STEP: Saw pod success
Feb 11 14:43:40.154: INFO: Pod "var-expansion-c72b3f58-d77a-49c3-9e63-0ded80ffd0d8" satisfied condition "success or failure"
Feb 11 14:43:40.159: INFO: Trying to get logs from node iruya-node pod var-expansion-c72b3f58-d77a-49c3-9e63-0ded80ffd0d8 container dapi-container: 
STEP: delete the pod
Feb 11 14:43:40.293: INFO: Waiting for pod var-expansion-c72b3f58-d77a-49c3-9e63-0ded80ffd0d8 to disappear
Feb 11 14:43:40.340: INFO: Pod var-expansion-c72b3f58-d77a-49c3-9e63-0ded80ffd0d8 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:43:40.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4457" for this suite.
Feb 11 14:43:46.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:43:46.654: INFO: namespace var-expansion-4457 deletion completed in 6.288922991s

• [SLOW TEST:16.744 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
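Substitution in a container's command uses the kubelet's $(VAR) expansion, which is resolved against the container's env before the process starts; it is not shell expansion. A minimal sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      env:
      - name: MESSAGE
        value: hello from the environment
      # the kubelet rewrites $(MESSAGE) before exec, so sh just sees the value
      command: ["sh", "-c", "echo $(MESSAGE)"]
  EOF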
S
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:43:46.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 14:43:46.814: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 11 14:43:51.831: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 11 14:43:55.852: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 11 14:43:55.944: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-372,SelfLink:/apis/apps/v1/namespaces/deployment-372/deployments/test-cleanup-deployment,UID:7b567d49-6e1c-4b98-9228-714f54e852b1,ResourceVersion:23959963,Generation:1,CreationTimestamp:2020-02-11 14:43:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb 11 14:43:55.996: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-372,SelfLink:/apis/apps/v1/namespaces/deployment-372/replicasets/test-cleanup-deployment-55bbcbc84c,UID:00982617-6df6-4b21-bf99-3318e4663176,ResourceVersion:23959965,Generation:1,CreationTimestamp:2020-02-11 14:43:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 7b567d49-6e1c-4b98-9228-714f54e852b1 0xc000aeaea7 0xc000aeaea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 11 14:43:55.996: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb 11 14:43:55.996: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-372,SelfLink:/apis/apps/v1/namespaces/deployment-372/replicasets/test-cleanup-controller,UID:27e74474-b835-4702-976d-97b2b8cb4f92,ResourceVersion:23959964,Generation:1,CreationTimestamp:2020-02-11 14:43:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 7b567d49-6e1c-4b98-9228-714f54e852b1 0xc000aeadbf 0xc000aeadd0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 11 14:43:56.021: INFO: Pod "test-cleanup-controller-qmgjt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-qmgjt,GenerateName:test-cleanup-controller-,Namespace:deployment-372,SelfLink:/api/v1/namespaces/deployment-372/pods/test-cleanup-controller-qmgjt,UID:f70fdd5d-0fb3-4898-9e63-de4362fd189c,ResourceVersion:23959958,Generation:0,CreationTimestamp:2020-02-11 14:43:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 27e74474-b835-4702-976d-97b2b8cb4f92 0xc002a07907 0xc002a07908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-249tc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-249tc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-249tc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002a07980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002a079a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:43:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:43:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:43:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:43:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-11 14:43:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-11 14:43:52 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b735e0981ae1a2a5a71121e450f8b2712ce8d47da65a3f032ec75882116a28fc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 11 14:43:56.022: INFO: Pod "test-cleanup-deployment-55bbcbc84c-99dbd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-99dbd,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-372,SelfLink:/api/v1/namespaces/deployment-372/pods/test-cleanup-deployment-55bbcbc84c-99dbd,UID:03fdd561-31a7-4af7-85c2-daa36877c1f2,ResourceVersion:23959968,Generation:0,CreationTimestamp:2020-02-11 14:43:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 00982617-6df6-4b21-bf99-3318e4663176 0xc002a07a87 0xc002a07a88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-249tc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-249tc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-249tc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002a07af0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002a07b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:43:56.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-372" for this suite.
Feb 11 14:44:04.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:44:04.315: INFO: namespace deployment-372 deletion completed in 8.263977428s

• [SLOW TEST:17.661 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
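Note RevisionHistoryLimit:*0 in the dump above: with .spec.revisionHistoryLimit set to 0, superseded ReplicaSets are deleted as soon as a rollout replaces them. A sketch of reproducing that (names are illustrative; kubectl create deployment labels the pods app=<name> and names the container after the image):

  kubectl create deployment cleanup-demo --image=nginx:1.14-alpine
  kubectl patch deployment cleanup-demo -p '{"spec":{"revisionHistoryLimit":0}}'
  kubectl set image deployment/cleanup-demo nginx=gcr.io/kubernetes-e2e-test-images/redis:1.0
  kubectl get rs -l app=cleanup-demo   # only the current ReplicaSet should remain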
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:44:04.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 11 14:44:04.533: INFO: Waiting up to 5m0s for pod "downward-api-810cd6fc-d75f-4e10-8c36-d137afcb4256" in namespace "downward-api-1191" to be "success or failure"
Feb 11 14:44:04.573: INFO: Pod "downward-api-810cd6fc-d75f-4e10-8c36-d137afcb4256": Phase="Pending", Reason="", readiness=false. Elapsed: 39.46459ms
Feb 11 14:44:06.596: INFO: Pod "downward-api-810cd6fc-d75f-4e10-8c36-d137afcb4256": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06324918s
Feb 11 14:44:08.609: INFO: Pod "downward-api-810cd6fc-d75f-4e10-8c36-d137afcb4256": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075402148s
Feb 11 14:44:10.620: INFO: Pod "downward-api-810cd6fc-d75f-4e10-8c36-d137afcb4256": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086448209s
Feb 11 14:44:12.641: INFO: Pod "downward-api-810cd6fc-d75f-4e10-8c36-d137afcb4256": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107836684s
Feb 11 14:44:14.685: INFO: Pod "downward-api-810cd6fc-d75f-4e10-8c36-d137afcb4256": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.151937204s
STEP: Saw pod success
Feb 11 14:44:14.685: INFO: Pod "downward-api-810cd6fc-d75f-4e10-8c36-d137afcb4256" satisfied condition "success or failure"
Feb 11 14:44:14.692: INFO: Trying to get logs from node iruya-node pod downward-api-810cd6fc-d75f-4e10-8c36-d137afcb4256 container dapi-container: 
STEP: delete the pod
Feb 11 14:44:14.779: INFO: Waiting for pod downward-api-810cd6fc-d75f-4e10-8c36-d137afcb4256 to disappear
Feb 11 14:44:14.819: INFO: Pod downward-api-810cd6fc-d75f-4e10-8c36-d137afcb4256 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:44:14.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1191" for this suite.
Feb 11 14:44:20.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:44:20.995: INFO: namespace downward-api-1191 deletion completed in 6.165931331s

• [SLOW TEST:16.679 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
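The env-var flavor of the same fallback: valueFrom.resourceFieldRef with no limit declared resolves to node allocatable. A minimal sketch with illustrative names; resourceFieldRef without containerName refers to the container it is declared in:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-defaults-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo cpu=$CPU_LIMIT mem=$MEMORY_LIMIT"]
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.cpu     # no limit set, so node allocatable is reported
      - name: MEMORY_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.memory
  EOF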
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:44:20.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6037
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 11 14:44:21.058: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 11 14:44:59.838: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-6037 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 14:44:59.838: INFO: >>> kubeConfig: /root/.kube/config
I0211 14:44:59.985622       9 log.go:172] (0xc000a52630) (0xc0013e6280) Create stream
I0211 14:44:59.985842       9 log.go:172] (0xc000a52630) (0xc0013e6280) Stream added, broadcasting: 1
I0211 14:44:59.993797       9 log.go:172] (0xc000a52630) Reply frame received for 1
I0211 14:44:59.993840       9 log.go:172] (0xc000a52630) (0xc000a2a5a0) Create stream
I0211 14:44:59.993850       9 log.go:172] (0xc000a52630) (0xc000a2a5a0) Stream added, broadcasting: 3
I0211 14:44:59.995167       9 log.go:172] (0xc000a52630) Reply frame received for 3
I0211 14:44:59.995200       9 log.go:172] (0xc000a52630) (0xc0020c1ae0) Create stream
I0211 14:44:59.995212       9 log.go:172] (0xc000a52630) (0xc0020c1ae0) Stream added, broadcasting: 5
I0211 14:44:59.996475       9 log.go:172] (0xc000a52630) Reply frame received for 5
I0211 14:45:00.159511       9 log.go:172] (0xc000a52630) Data frame received for 3
I0211 14:45:00.159571       9 log.go:172] (0xc000a2a5a0) (3) Data frame handling
I0211 14:45:00.159596       9 log.go:172] (0xc000a2a5a0) (3) Data frame sent
I0211 14:45:00.329188       9 log.go:172] (0xc000a52630) (0xc000a2a5a0) Stream removed, broadcasting: 3
I0211 14:45:00.329575       9 log.go:172] (0xc000a52630) Data frame received for 1
I0211 14:45:00.329630       9 log.go:172] (0xc0013e6280) (1) Data frame handling
I0211 14:45:00.329666       9 log.go:172] (0xc0013e6280) (1) Data frame sent
I0211 14:45:00.329698       9 log.go:172] (0xc000a52630) (0xc0020c1ae0) Stream removed, broadcasting: 5
I0211 14:45:00.329795       9 log.go:172] (0xc000a52630) (0xc0013e6280) Stream removed, broadcasting: 1
I0211 14:45:00.329825       9 log.go:172] (0xc000a52630) Go away received
I0211 14:45:00.330179       9 log.go:172] (0xc000a52630) (0xc0013e6280) Stream removed, broadcasting: 1
I0211 14:45:00.330213       9 log.go:172] (0xc000a52630) (0xc000a2a5a0) Stream removed, broadcasting: 3
I0211 14:45:00.330225       9 log.go:172] (0xc000a52630) (0xc0020c1ae0) Stream removed, broadcasting: 5
Feb 11 14:45:00.330: INFO: Waiting for endpoints: map[]
Feb 11 14:45:00.342: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-6037 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 14:45:00.342: INFO: >>> kubeConfig: /root/.kube/config
I0211 14:45:00.422024       9 log.go:172] (0xc00133a9a0) (0xc00263c460) Create stream
I0211 14:45:00.422132       9 log.go:172] (0xc00133a9a0) (0xc00263c460) Stream added, broadcasting: 1
I0211 14:45:00.429777       9 log.go:172] (0xc00133a9a0) Reply frame received for 1
I0211 14:45:00.429838       9 log.go:172] (0xc00133a9a0) (0xc001c415e0) Create stream
I0211 14:45:00.429848       9 log.go:172] (0xc00133a9a0) (0xc001c415e0) Stream added, broadcasting: 3
I0211 14:45:00.432518       9 log.go:172] (0xc00133a9a0) Reply frame received for 3
I0211 14:45:00.432713       9 log.go:172] (0xc00133a9a0) (0xc00263c500) Create stream
I0211 14:45:00.432744       9 log.go:172] (0xc00133a9a0) (0xc00263c500) Stream added, broadcasting: 5
I0211 14:45:00.442508       9 log.go:172] (0xc00133a9a0) Reply frame received for 5
I0211 14:45:00.751047       9 log.go:172] (0xc00133a9a0) Data frame received for 3
I0211 14:45:00.751180       9 log.go:172] (0xc001c415e0) (3) Data frame handling
I0211 14:45:00.751227       9 log.go:172] (0xc001c415e0) (3) Data frame sent
I0211 14:45:01.014128       9 log.go:172] (0xc00133a9a0) Data frame received for 1
I0211 14:45:01.014285       9 log.go:172] (0xc00133a9a0) (0xc001c415e0) Stream removed, broadcasting: 3
I0211 14:45:01.014350       9 log.go:172] (0xc00263c460) (1) Data frame handling
I0211 14:45:01.014388       9 log.go:172] (0xc00263c460) (1) Data frame sent
I0211 14:45:01.014413       9 log.go:172] (0xc00133a9a0) (0xc00263c500) Stream removed, broadcasting: 5
I0211 14:45:01.014492       9 log.go:172] (0xc00133a9a0) (0xc00263c460) Stream removed, broadcasting: 1
I0211 14:45:01.014506       9 log.go:172] (0xc00133a9a0) Go away received
I0211 14:45:01.014716       9 log.go:172] (0xc00133a9a0) (0xc00263c460) Stream removed, broadcasting: 1
I0211 14:45:01.014750       9 log.go:172] (0xc00133a9a0) (0xc001c415e0) Stream removed, broadcasting: 3
I0211 14:45:01.014772       9 log.go:172] (0xc00133a9a0) (0xc00263c500) Stream removed, broadcasting: 5
Feb 11 14:45:01.014: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:45:01.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6037" for this suite.
Feb 11 14:45:25.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:45:25.146: INFO: namespace pod-network-test-6037 deletion completed in 24.117665895s

• [SLOW TEST:64.149 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
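For reference, the "Granular Checks" probe above reduces to one exec'd curl: the framework asks the host test pod's /dial helper to reach the target pod's hostName endpoint. A minimal sketch, reusing the pod name, namespace, and URL exactly as they appear in the log (the 10.44.0.x addresses are specific to this run):

kubectl --kubeconfig=/root/.kube/config exec -n pod-network-test-6037 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'"

A non-empty JSON response from /dial confirms pod-to-pod HTTP reachability; the "Waiting for endpoints: map[]" lines above mean the set of endpoints still outstanding is empty.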
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:45:25.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-d1002987-b705-4a86-986c-54399795bbf0
STEP: Creating a pod to test consume secrets
Feb 11 14:45:25.273: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-037eb689-6461-4ac6-9218-31df8678dea7" in namespace "projected-5940" to be "success or failure"
Feb 11 14:45:25.280: INFO: Pod "pod-projected-secrets-037eb689-6461-4ac6-9218-31df8678dea7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.646659ms
Feb 11 14:45:27.296: INFO: Pod "pod-projected-secrets-037eb689-6461-4ac6-9218-31df8678dea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02269926s
Feb 11 14:45:29.303: INFO: Pod "pod-projected-secrets-037eb689-6461-4ac6-9218-31df8678dea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029899166s
Feb 11 14:45:31.345: INFO: Pod "pod-projected-secrets-037eb689-6461-4ac6-9218-31df8678dea7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070922248s
Feb 11 14:45:33.406: INFO: Pod "pod-projected-secrets-037eb689-6461-4ac6-9218-31df8678dea7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132382615s
Feb 11 14:45:35.418: INFO: Pod "pod-projected-secrets-037eb689-6461-4ac6-9218-31df8678dea7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.144346085s
STEP: Saw pod success
Feb 11 14:45:35.418: INFO: Pod "pod-projected-secrets-037eb689-6461-4ac6-9218-31df8678dea7" satisfied condition "success or failure"
Feb 11 14:45:35.423: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-037eb689-6461-4ac6-9218-31df8678dea7 container projected-secret-volume-test: 
STEP: delete the pod
Feb 11 14:45:35.480: INFO: Waiting for pod pod-projected-secrets-037eb689-6461-4ac6-9218-31df8678dea7 to disappear
Feb 11 14:45:35.487: INFO: Pod pod-projected-secrets-037eb689-6461-4ac6-9218-31df8678dea7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:45:35.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5940" for this suite.
Feb 11 14:45:43.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:45:43.811: INFO: namespace projected-5940 deletion completed in 8.316265166s

• [SLOW TEST:18.665 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
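The pod this spec creates can be sketched as below. All names and the mode value are illustrative (the test generates random names); the secret is created first so the projected volume can resolve it:

kubectl create secret generic projected-secret-example --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: projected-secret-example
EOF

The container just lists the mounted files; checking the printed permission bits against the requested defaultMode is essentially what the "success or failure" log-inspection step above amounts to.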
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:45:43.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4448
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 11 14:45:44.044: INFO: Found 0 stateful pods, waiting for 3
Feb 11 14:45:54.063: INFO: Found 2 stateful pods, waiting for 3
Feb 11 14:46:04.062: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 14:46:04.063: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 14:46:04.063: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 11 14:46:14.059: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 14:46:14.059: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 14:46:14.059: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 11 14:46:14.097: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 11 14:46:24.159: INFO: Updating stateful set ss2
Feb 11 14:46:24.203: INFO: Waiting for Pod statefulset-4448/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 11 14:46:34.593: INFO: Found 2 stateful pods, waiting for 3
Feb 11 14:46:44.602: INFO: Found 2 stateful pods, waiting for 3
Feb 11 14:46:54.615: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 14:46:54.615: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 14:46:54.615: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 11 14:46:54.653: INFO: Updating stateful set ss2
Feb 11 14:46:54.751: INFO: Waiting for Pod statefulset-4448/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 11 14:47:04.776: INFO: Waiting for Pod statefulset-4448/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 11 14:47:14.896: INFO: Updating stateful set ss2
Feb 11 14:47:15.024: INFO: Waiting for StatefulSet statefulset-4448/ss2 to complete update
Feb 11 14:47:15.025: INFO: Waiting for Pod statefulset-4448/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 11 14:47:25.051: INFO: Waiting for StatefulSet statefulset-4448/ss2 to complete update
Feb 11 14:47:25.051: INFO: Waiting for Pod statefulset-4448/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 11 14:47:35.066: INFO: Deleting all statefulset in ns statefulset-4448
Feb 11 14:47:35.071: INFO: Scaling statefulset ss2 to 0
Feb 11 14:48:05.117: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 14:48:05.126: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:48:05.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4448" for this suite.
Feb 11 14:48:11.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:48:11.316: INFO: namespace statefulset-4448 deletion completed in 6.152290324s

• [SLOW TEST:147.504 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
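The canary and phased roll-outs above hinge on the RollingUpdate partition: only pods with an ordinal greater than or equal to the partition are moved to the new revision. A rough kubectl equivalent (namespace, set name, and image tags from the log; the container name "nginx" is an assumption, since the log does not show it):

# Hold every pod at the current revision by raising the partition past the last ordinal:
kubectl -n statefulset-4448 patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":3}}}}'
# Change the template; nothing rolls yet because partition=3 exceeds the highest ordinal (2):
kubectl -n statefulset-4448 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
# Canary: partition=2 updates only ss2-2; the phased roll-out then lowers it to 1 and 0:
kubectl -n statefulset-4448 patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'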
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:48:11.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4014/configmap-test-2d95c6f7-66da-4c8f-bb96-ac7f6554fc2c
STEP: Creating a pod to test consume configMaps
Feb 11 14:48:11.428: INFO: Waiting up to 5m0s for pod "pod-configmaps-593a8fb0-b5e1-4f63-a3e8-f84bc9cc3c63" in namespace "configmap-4014" to be "success or failure"
Feb 11 14:48:11.433: INFO: Pod "pod-configmaps-593a8fb0-b5e1-4f63-a3e8-f84bc9cc3c63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.581905ms
Feb 11 14:48:13.452: INFO: Pod "pod-configmaps-593a8fb0-b5e1-4f63-a3e8-f84bc9cc3c63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023455567s
Feb 11 14:48:15.486: INFO: Pod "pod-configmaps-593a8fb0-b5e1-4f63-a3e8-f84bc9cc3c63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057795239s
Feb 11 14:48:17.495: INFO: Pod "pod-configmaps-593a8fb0-b5e1-4f63-a3e8-f84bc9cc3c63": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067431719s
Feb 11 14:48:20.098: INFO: Pod "pod-configmaps-593a8fb0-b5e1-4f63-a3e8-f84bc9cc3c63": Phase="Pending", Reason="", readiness=false. Elapsed: 8.66972865s
Feb 11 14:48:22.109: INFO: Pod "pod-configmaps-593a8fb0-b5e1-4f63-a3e8-f84bc9cc3c63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.680711019s
STEP: Saw pod success
Feb 11 14:48:22.109: INFO: Pod "pod-configmaps-593a8fb0-b5e1-4f63-a3e8-f84bc9cc3c63" satisfied condition "success or failure"
Feb 11 14:48:22.116: INFO: Trying to get logs from node iruya-node pod pod-configmaps-593a8fb0-b5e1-4f63-a3e8-f84bc9cc3c63 container env-test: 
STEP: delete the pod
Feb 11 14:48:22.353: INFO: Waiting for pod pod-configmaps-593a8fb0-b5e1-4f63-a3e8-f84bc9cc3c63 to disappear
Feb 11 14:48:22.437: INFO: Pod pod-configmaps-593a8fb0-b5e1-4f63-a3e8-f84bc9cc3c63 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:48:22.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4014" for this suite.
Feb 11 14:48:28.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:48:28.592: INFO: namespace configmap-4014 deletion completed in 6.146441774s

• [SLOW TEST:17.276 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
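The wiring this spec verifies, a ConfigMap key surfaced as an environment variable, can be sketched as follows (all names are illustrative; the test's own names are randomized):

kubectl create configmap configmap-test --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep DATA_1"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
EOF

Reading the pod's log and matching the expected value is what the "Trying to get logs ... container env-test" step above performs.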
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:48:28.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb 11 14:48:28.706: INFO: Waiting up to 5m0s for pod "client-containers-ec93c35b-46a8-4687-a255-aaf4f3382c53" in namespace "containers-1567" to be "success or failure"
Feb 11 14:48:28.716: INFO: Pod "client-containers-ec93c35b-46a8-4687-a255-aaf4f3382c53": Phase="Pending", Reason="", readiness=false. Elapsed: 10.38447ms
Feb 11 14:48:30.725: INFO: Pod "client-containers-ec93c35b-46a8-4687-a255-aaf4f3382c53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019376824s
Feb 11 14:48:32.754: INFO: Pod "client-containers-ec93c35b-46a8-4687-a255-aaf4f3382c53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048057043s
Feb 11 14:48:34.774: INFO: Pod "client-containers-ec93c35b-46a8-4687-a255-aaf4f3382c53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068263486s
Feb 11 14:48:36.860: INFO: Pod "client-containers-ec93c35b-46a8-4687-a255-aaf4f3382c53": Phase="Running", Reason="", readiness=true. Elapsed: 8.154298124s
Feb 11 14:48:38.875: INFO: Pod "client-containers-ec93c35b-46a8-4687-a255-aaf4f3382c53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.169176631s
STEP: Saw pod success
Feb 11 14:48:38.875: INFO: Pod "client-containers-ec93c35b-46a8-4687-a255-aaf4f3382c53" satisfied condition "success or failure"
Feb 11 14:48:38.880: INFO: Trying to get logs from node iruya-node pod client-containers-ec93c35b-46a8-4687-a255-aaf4f3382c53 container test-container: 
STEP: delete the pod
Feb 11 14:48:38.979: INFO: Waiting for pod client-containers-ec93c35b-46a8-4687-a255-aaf4f3382c53 to disappear
Feb 11 14:48:38.990: INFO: Pod client-containers-ec93c35b-46a8-4687-a255-aaf4f3382c53 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:48:38.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1567" for this suite.
Feb 11 14:48:45.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:48:45.172: INFO: namespace containers-1567 deletion completed in 6.174673005s

• [SLOW TEST:16.579 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
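The point of this spec is that a container with no command and no args falls back entirely to the image's ENTRYPOINT and CMD. A minimal sketch (pod name and image are illustrative; busybox's default CMD is sh, which exits immediately without a TTY, so the pod ends Succeeded just as in the log):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # no command and no args: the image defaults run unmodified
EOF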
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:48:45.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 11 14:48:46.633: INFO: Waiting up to 5m0s for pod "pod-a6ae57ea-0303-4d3b-ad45-260f417a63e9" in namespace "emptydir-2597" to be "success or failure"
Feb 11 14:48:46.677: INFO: Pod "pod-a6ae57ea-0303-4d3b-ad45-260f417a63e9": Phase="Pending", Reason="", readiness=false. Elapsed: 44.20952ms
Feb 11 14:48:48.693: INFO: Pod "pod-a6ae57ea-0303-4d3b-ad45-260f417a63e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059500681s
Feb 11 14:48:50.713: INFO: Pod "pod-a6ae57ea-0303-4d3b-ad45-260f417a63e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07971618s
Feb 11 14:48:52.722: INFO: Pod "pod-a6ae57ea-0303-4d3b-ad45-260f417a63e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089183554s
Feb 11 14:48:54.728: INFO: Pod "pod-a6ae57ea-0303-4d3b-ad45-260f417a63e9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095214082s
Feb 11 14:48:56.737: INFO: Pod "pod-a6ae57ea-0303-4d3b-ad45-260f417a63e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.103704835s
STEP: Saw pod success
Feb 11 14:48:56.737: INFO: Pod "pod-a6ae57ea-0303-4d3b-ad45-260f417a63e9" satisfied condition "success or failure"
Feb 11 14:48:56.741: INFO: Trying to get logs from node iruya-node pod pod-a6ae57ea-0303-4d3b-ad45-260f417a63e9 container test-container: 
STEP: delete the pod
Feb 11 14:48:56.796: INFO: Waiting for pod pod-a6ae57ea-0303-4d3b-ad45-260f417a63e9 to disappear
Feb 11 14:48:56.846: INFO: Pod pod-a6ae57ea-0303-4d3b-ad45-260f417a63e9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:48:56.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2597" for this suite.
Feb 11 14:49:02.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:49:03.022: INFO: namespace emptydir-2597 deletion completed in 6.160774471s

• [SLOW TEST:17.851 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
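An emptyDir with no medium set is backed by the node's default storage, and the spec asserts the mount's mode bits. A rough stand-in (names illustrative; the e2e framework uses its own mounttest image rather than stat):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]  # print the mount's permission bits
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}  # no medium set: node's default storage medium
EOF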
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:49:03.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 11 14:49:03.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-269'
Feb 11 14:49:05.722: INFO: stderr: ""
Feb 11 14:49:05.723: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 11 14:49:05.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-269'
Feb 11 14:49:05.951: INFO: stderr: ""
Feb 11 14:49:05.951: INFO: stdout: "update-demo-nautilus-27nt8 update-demo-nautilus-mngbb "
Feb 11 14:49:05.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27nt8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:06.233: INFO: stderr: ""
Feb 11 14:49:06.233: INFO: stdout: ""
Feb 11 14:49:06.233: INFO: update-demo-nautilus-27nt8 is created but not running
Feb 11 14:49:11.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-269'
Feb 11 14:49:11.870: INFO: stderr: ""
Feb 11 14:49:11.870: INFO: stdout: "update-demo-nautilus-27nt8 update-demo-nautilus-mngbb "
Feb 11 14:49:11.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27nt8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:12.325: INFO: stderr: ""
Feb 11 14:49:12.325: INFO: stdout: ""
Feb 11 14:49:12.325: INFO: update-demo-nautilus-27nt8 is created but not running
Feb 11 14:49:17.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-269'
Feb 11 14:49:17.501: INFO: stderr: ""
Feb 11 14:49:17.501: INFO: stdout: "update-demo-nautilus-27nt8 update-demo-nautilus-mngbb "
Feb 11 14:49:17.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27nt8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:17.662: INFO: stderr: ""
Feb 11 14:49:17.662: INFO: stdout: "true"
Feb 11 14:49:17.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27nt8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:17.751: INFO: stderr: ""
Feb 11 14:49:17.751: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 14:49:17.751: INFO: validating pod update-demo-nautilus-27nt8
Feb 11 14:49:17.770: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 14:49:17.770: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 11 14:49:17.770: INFO: update-demo-nautilus-27nt8 is verified up and running
Feb 11 14:49:17.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mngbb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:17.907: INFO: stderr: ""
Feb 11 14:49:17.907: INFO: stdout: "true"
Feb 11 14:49:17.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mngbb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:18.006: INFO: stderr: ""
Feb 11 14:49:18.006: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 14:49:18.006: INFO: validating pod update-demo-nautilus-mngbb
Feb 11 14:49:18.016: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 14:49:18.016: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 11 14:49:18.016: INFO: update-demo-nautilus-mngbb is verified up and running
STEP: scaling down the replication controller
Feb 11 14:49:18.019: INFO: scanned /root for discovery docs: 
Feb 11 14:49:18.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-269'
Feb 11 14:49:19.280: INFO: stderr: ""
Feb 11 14:49:19.280: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 11 14:49:19.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-269'
Feb 11 14:49:19.484: INFO: stderr: ""
Feb 11 14:49:19.484: INFO: stdout: "update-demo-nautilus-27nt8 update-demo-nautilus-mngbb "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 11 14:49:24.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-269'
Feb 11 14:49:24.736: INFO: stderr: ""
Feb 11 14:49:24.736: INFO: stdout: "update-demo-nautilus-27nt8 update-demo-nautilus-mngbb "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 11 14:49:29.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-269'
Feb 11 14:49:29.956: INFO: stderr: ""
Feb 11 14:49:29.957: INFO: stdout: "update-demo-nautilus-mngbb "
Feb 11 14:49:29.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mngbb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:30.117: INFO: stderr: ""
Feb 11 14:49:30.117: INFO: stdout: "true"
Feb 11 14:49:30.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mngbb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:30.207: INFO: stderr: ""
Feb 11 14:49:30.208: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 14:49:30.208: INFO: validating pod update-demo-nautilus-mngbb
Feb 11 14:49:30.239: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 14:49:30.239: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 11 14:49:30.239: INFO: update-demo-nautilus-mngbb is verified up and running
STEP: scaling up the replication controller
Feb 11 14:49:30.242: INFO: scanned /root for discovery docs: 
Feb 11 14:49:30.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-269'
Feb 11 14:49:31.452: INFO: stderr: ""
Feb 11 14:49:31.453: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 11 14:49:31.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-269'
Feb 11 14:49:31.608: INFO: stderr: ""
Feb 11 14:49:31.609: INFO: stdout: "update-demo-nautilus-mngbb update-demo-nautilus-rbf2b "
Feb 11 14:49:31.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mngbb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:31.714: INFO: stderr: ""
Feb 11 14:49:31.714: INFO: stdout: "true"
Feb 11 14:49:31.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mngbb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:31.874: INFO: stderr: ""
Feb 11 14:49:31.874: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 14:49:31.874: INFO: validating pod update-demo-nautilus-mngbb
Feb 11 14:49:31.882: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 14:49:31.882: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 11 14:49:31.882: INFO: update-demo-nautilus-mngbb is verified up and running
Feb 11 14:49:31.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbf2b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:32.014: INFO: stderr: ""
Feb 11 14:49:32.015: INFO: stdout: ""
Feb 11 14:49:32.015: INFO: update-demo-nautilus-rbf2b is created but not running
Feb 11 14:49:37.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-269'
Feb 11 14:49:37.182: INFO: stderr: ""
Feb 11 14:49:37.182: INFO: stdout: "update-demo-nautilus-mngbb update-demo-nautilus-rbf2b "
Feb 11 14:49:37.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mngbb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:37.320: INFO: stderr: ""
Feb 11 14:49:37.320: INFO: stdout: "true"
Feb 11 14:49:37.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mngbb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:37.412: INFO: stderr: ""
Feb 11 14:49:37.412: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 14:49:37.412: INFO: validating pod update-demo-nautilus-mngbb
Feb 11 14:49:37.417: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 14:49:37.417: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 11 14:49:37.417: INFO: update-demo-nautilus-mngbb is verified up and running
Feb 11 14:49:37.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbf2b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:37.516: INFO: stderr: ""
Feb 11 14:49:37.517: INFO: stdout: ""
Feb 11 14:49:37.517: INFO: update-demo-nautilus-rbf2b is created but not running
Feb 11 14:49:42.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-269'
Feb 11 14:49:42.669: INFO: stderr: ""
Feb 11 14:49:42.669: INFO: stdout: "update-demo-nautilus-mngbb update-demo-nautilus-rbf2b "
Feb 11 14:49:42.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mngbb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:42.789: INFO: stderr: ""
Feb 11 14:49:42.789: INFO: stdout: "true"
Feb 11 14:49:42.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mngbb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:42.903: INFO: stderr: ""
Feb 11 14:49:42.903: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 14:49:42.903: INFO: validating pod update-demo-nautilus-mngbb
Feb 11 14:49:42.911: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 14:49:42.911: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 11 14:49:42.911: INFO: update-demo-nautilus-mngbb is verified up and running
Feb 11 14:49:42.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbf2b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:43.036: INFO: stderr: ""
Feb 11 14:49:43.036: INFO: stdout: "true"
Feb 11 14:49:43.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbf2b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-269'
Feb 11 14:49:43.140: INFO: stderr: ""
Feb 11 14:49:43.141: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 14:49:43.141: INFO: validating pod update-demo-nautilus-rbf2b
Feb 11 14:49:43.156: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 14:49:43.156: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 11 14:49:43.157: INFO: update-demo-nautilus-rbf2b is verified up and running
STEP: using delete to clean up resources
Feb 11 14:49:43.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-269'
Feb 11 14:49:43.290: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 14:49:43.290: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 11 14:49:43.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-269'
Feb 11 14:49:43.416: INFO: stderr: "No resources found.\n"
Feb 11 14:49:43.417: INFO: stdout: ""
Feb 11 14:49:43.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-269 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 11 14:49:43.635: INFO: stderr: ""
Feb 11 14:49:43.636: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:49:43.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-269" for this suite.
Feb 11 14:50:06.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:50:06.794: INFO: namespace kubectl-269 deletion completed in 22.891219602s

• [SLOW TEST:63.770 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
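Stripped of the polling, the scale exercise above is two kubectl scale calls bracketed by label-selector listings; every value below is copied from the log:

kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-269
kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-269 \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-269

The repeated "Replicas for name=update-demo: expected=1 actual=2" lines are the test re-listing until the controller converges on the requested count.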
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:50:06.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb 11 14:50:06.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5169'
Feb 11 14:50:07.514: INFO: stderr: ""
Feb 11 14:50:07.514: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb 11 14:50:08.530: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:50:08.530: INFO: Found 0 / 1
Feb 11 14:50:09.526: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:50:09.526: INFO: Found 0 / 1
Feb 11 14:50:10.532: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:50:10.532: INFO: Found 0 / 1
Feb 11 14:50:11.527: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:50:11.527: INFO: Found 0 / 1
Feb 11 14:50:12.531: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:50:12.531: INFO: Found 0 / 1
Feb 11 14:50:13.525: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:50:13.525: INFO: Found 0 / 1
Feb 11 14:50:14.527: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:50:14.527: INFO: Found 0 / 1
Feb 11 14:50:15.526: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:50:15.526: INFO: Found 0 / 1
Feb 11 14:50:16.533: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:50:16.534: INFO: Found 1 / 1
Feb 11 14:50:16.534: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 11 14:50:16.542: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 14:50:16.542: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb 11 14:50:16.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-89vgb redis-master --namespace=kubectl-5169'
Feb 11 14:50:16.783: INFO: stderr: ""
Feb 11 14:50:16.783: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 11 Feb 14:50:14.277 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Feb 14:50:14.277 # Server started, Redis version 3.2.12\n1:M 11 Feb 14:50:14.278 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Feb 14:50:14.278 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 11 14:50:16.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-89vgb redis-master --namespace=kubectl-5169 --tail=1'
Feb 11 14:50:16.938: INFO: stderr: ""
Feb 11 14:50:16.938: INFO: stdout: "1:M 11 Feb 14:50:14.278 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 11 14:50:16.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-89vgb redis-master --namespace=kubectl-5169 --limit-bytes=1'
Feb 11 14:50:17.116: INFO: stderr: ""
Feb 11 14:50:17.116: INFO: stdout: " "
STEP: exposing timestamps
Feb 11 14:50:17.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-89vgb redis-master --namespace=kubectl-5169 --tail=1 --timestamps'
Feb 11 14:50:17.275: INFO: stderr: ""
Feb 11 14:50:17.275: INFO: stdout: "2020-02-11T14:50:14.288480268Z 1:M 11 Feb 14:50:14.278 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 11 14:50:19.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-89vgb redis-master --namespace=kubectl-5169 --since=1s'
Feb 11 14:50:20.126: INFO: stderr: ""
Feb 11 14:50:20.127: INFO: stdout: ""
Feb 11 14:50:20.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-89vgb redis-master --namespace=kubectl-5169 --since=24h'
Feb 11 14:50:20.262: INFO: stderr: ""
Feb 11 14:50:20.262: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 11 Feb 14:50:14.277 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Feb 14:50:14.277 # Server started, Redis version 3.2.12\n1:M 11 Feb 14:50:14.278 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Feb 14:50:14.278 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb 11 14:50:20.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5169'
Feb 11 14:50:20.397: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 14:50:20.397: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 11 14:50:20.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-5169'
Feb 11 14:50:20.596: INFO: stderr: "No resources found.\n"
Feb 11 14:50:20.597: INFO: stdout: ""
Feb 11 14:50:20.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-5169 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 11 14:50:20.752: INFO: stderr: ""
Feb 11 14:50:20.752: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:50:20.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5169" for this suite.
Feb 11 14:50:42.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:50:42.940: INFO: namespace kubectl-5169 deletion completed in 22.179284921s

• [SLOW TEST:36.146 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
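For reference, the log-filtering flags exercised above, collected in one place (pod, container, and namespace names are taken from the log):

kubectl logs redis-master-89vgb redis-master --namespace=kubectl-5169 --tail=1         # last line only
kubectl logs redis-master-89vgb redis-master --namespace=kubectl-5169 --limit-bytes=1  # first byte only
kubectl logs redis-master-89vgb redis-master --namespace=kubectl-5169 --tail=1 --timestamps
kubectl logs redis-master-89vgb redis-master --namespace=kubectl-5169 --since=1s       # empty when nothing was logged in the last second
kubectl logs redis-master-89vgb redis-master --namespace=kubectl-5169 --since=24h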
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:50:42.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5917
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-5917
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-5917
Feb 11 14:50:43.139: INFO: Found 0 stateful pods, waiting for 1
Feb 11 14:50:53.149: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 11 14:50:53.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 11 14:50:54.009: INFO: stderr: "I0211 14:50:53.493923    2585 log.go:172] (0xc000a58370) (0xc0006d86e0) Create stream\nI0211 14:50:53.494356    2585 log.go:172] (0xc000a58370) (0xc0006d86e0) Stream added, broadcasting: 1\nI0211 14:50:53.518466    2585 log.go:172] (0xc000a58370) Reply frame received for 1\nI0211 14:50:53.518542    2585 log.go:172] (0xc000a58370) (0xc000622320) Create stream\nI0211 14:50:53.518570    2585 log.go:172] (0xc000a58370) (0xc000622320) Stream added, broadcasting: 3\nI0211 14:50:53.520479    2585 log.go:172] (0xc000a58370) Reply frame received for 3\nI0211 14:50:53.520530    2585 log.go:172] (0xc000a58370) (0xc0006d8000) Create stream\nI0211 14:50:53.520543    2585 log.go:172] (0xc000a58370) (0xc0006d8000) Stream added, broadcasting: 5\nI0211 14:50:53.522465    2585 log.go:172] (0xc000a58370) Reply frame received for 5\nI0211 14:50:53.658348    2585 log.go:172] (0xc000a58370) Data frame received for 5\nI0211 14:50:53.658522    2585 log.go:172] (0xc0006d8000) (5) Data frame handling\nI0211 14:50:53.658598    2585 log.go:172] (0xc0006d8000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0211 14:50:53.707140    2585 log.go:172] (0xc000a58370) Data frame received for 3\nI0211 14:50:53.707182    2585 log.go:172] (0xc000622320) (3) Data frame handling\nI0211 14:50:53.707211    2585 log.go:172] (0xc000622320) (3) Data frame sent\nI0211 14:50:53.980374    2585 log.go:172] (0xc000a58370) Data frame received for 1\nI0211 14:50:53.980635    2585 log.go:172] (0xc000a58370) (0xc000622320) Stream removed, broadcasting: 3\nI0211 14:50:53.980890    2585 log.go:172] (0xc000a58370) (0xc0006d8000) Stream removed, broadcasting: 5\nI0211 14:50:53.981019    2585 log.go:172] (0xc0006d86e0) (1) Data frame handling\nI0211 14:50:53.981093    2585 log.go:172] (0xc0006d86e0) (1) Data frame sent\nI0211 14:50:53.981121    2585 log.go:172] (0xc000a58370) (0xc0006d86e0) Stream removed, broadcasting: 1\nI0211 14:50:53.981156    2585 log.go:172] (0xc000a58370) Go away received\nI0211 14:50:53.982854    2585 log.go:172] (0xc000a58370) (0xc0006d86e0) Stream removed, broadcasting: 1\nI0211 14:50:53.982891    2585 log.go:172] (0xc000a58370) (0xc000622320) Stream removed, broadcasting: 3\nI0211 14:50:53.982916    2585 log.go:172] (0xc000a58370) (0xc0006d8000) Stream removed, broadcasting: 5\n"
Feb 11 14:50:54.010: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 11 14:50:54.010: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 11 14:50:54.029: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 11 14:51:04.046: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 11 14:51:04.047: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 14:51:04.082: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 11 14:51:04.082: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  }]
Feb 11 14:51:04.082: INFO: 
Feb 11 14:51:04.082: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 11 14:51:05.477: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994294414s
Feb 11 14:51:06.743: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.597137908s
Feb 11 14:51:07.763: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.333199033s
Feb 11 14:51:08.778: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.312481125s
Feb 11 14:51:09.843: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.297893695s
Feb 11 14:51:10.863: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.232823455s
Feb 11 14:51:11.966: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.213053138s
Feb 11 14:51:12.981: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.109822845s
Feb 11 14:51:13.999: INFO: Verifying statefulset ss doesn't scale past 3 for another 94.479125ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5917
Feb 11 14:51:15.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:51:15.676: INFO: stderr: "I0211 14:51:15.442443    2606 log.go:172] (0xc0009900b0) (0xc0009825a0) Create stream\nI0211 14:51:15.442994    2606 log.go:172] (0xc0009900b0) (0xc0009825a0) Stream added, broadcasting: 1\nI0211 14:51:15.448178    2606 log.go:172] (0xc0009900b0) Reply frame received for 1\nI0211 14:51:15.448222    2606 log.go:172] (0xc0009900b0) (0xc0005d4320) Create stream\nI0211 14:51:15.448236    2606 log.go:172] (0xc0009900b0) (0xc0005d4320) Stream added, broadcasting: 3\nI0211 14:51:15.449866    2606 log.go:172] (0xc0009900b0) Reply frame received for 3\nI0211 14:51:15.449914    2606 log.go:172] (0xc0009900b0) (0xc000366000) Create stream\nI0211 14:51:15.449926    2606 log.go:172] (0xc0009900b0) (0xc000366000) Stream added, broadcasting: 5\nI0211 14:51:15.451254    2606 log.go:172] (0xc0009900b0) Reply frame received for 5\nI0211 14:51:15.543760    2606 log.go:172] (0xc0009900b0) Data frame received for 3\nI0211 14:51:15.543875    2606 log.go:172] (0xc0005d4320) (3) Data frame handling\nI0211 14:51:15.543908    2606 log.go:172] (0xc0005d4320) (3) Data frame sent\nI0211 14:51:15.543961    2606 log.go:172] (0xc0009900b0) Data frame received for 5\nI0211 14:51:15.543984    2606 log.go:172] (0xc000366000) (5) Data frame handling\nI0211 14:51:15.544006    2606 log.go:172] (0xc000366000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0211 14:51:15.661784    2606 log.go:172] (0xc0009900b0) Data frame received for 1\nI0211 14:51:15.662507    2606 log.go:172] (0xc0009900b0) (0xc000366000) Stream removed, broadcasting: 5\nI0211 14:51:15.662663    2606 log.go:172] (0xc0009825a0) (1) Data frame handling\nI0211 14:51:15.662741    2606 log.go:172] (0xc0009900b0) (0xc0005d4320) Stream removed, broadcasting: 3\nI0211 14:51:15.662887    2606 log.go:172] (0xc0009825a0) (1) Data frame sent\nI0211 14:51:15.662920    2606 log.go:172] (0xc0009900b0) (0xc0009825a0) Stream removed, broadcasting: 1\nI0211 14:51:15.662948    2606 log.go:172] (0xc0009900b0) Go away received\nI0211 14:51:15.664828    2606 log.go:172] (0xc0009900b0) (0xc0009825a0) Stream removed, broadcasting: 1\nI0211 14:51:15.664851    2606 log.go:172] (0xc0009900b0) (0xc0005d4320) Stream removed, broadcasting: 3\nI0211 14:51:15.664872    2606 log.go:172] (0xc0009900b0) (0xc000366000) Stream removed, broadcasting: 5\n"
Feb 11 14:51:15.676: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 11 14:51:15.676: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 11 14:51:15.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:51:16.082: INFO: stderr: "I0211 14:51:15.874595    2625 log.go:172] (0xc0009fa420) (0xc000a268c0) Create stream\nI0211 14:51:15.874822    2625 log.go:172] (0xc0009fa420) (0xc000a268c0) Stream added, broadcasting: 1\nI0211 14:51:15.880137    2625 log.go:172] (0xc0009fa420) Reply frame received for 1\nI0211 14:51:15.880214    2625 log.go:172] (0xc0009fa420) (0xc0003319a0) Create stream\nI0211 14:51:15.880268    2625 log.go:172] (0xc0009fa420) (0xc0003319a0) Stream added, broadcasting: 3\nI0211 14:51:15.882057    2625 log.go:172] (0xc0009fa420) Reply frame received for 3\nI0211 14:51:15.882102    2625 log.go:172] (0xc0009fa420) (0xc000678140) Create stream\nI0211 14:51:15.882118    2625 log.go:172] (0xc0009fa420) (0xc000678140) Stream added, broadcasting: 5\nI0211 14:51:15.883870    2625 log.go:172] (0xc0009fa420) Reply frame received for 5\nI0211 14:51:15.966056    2625 log.go:172] (0xc0009fa420) Data frame received for 3\nI0211 14:51:15.966147    2625 log.go:172] (0xc0003319a0) (3) Data frame handling\nI0211 14:51:15.966176    2625 log.go:172] (0xc0003319a0) (3) Data frame sent\nI0211 14:51:15.966462    2625 log.go:172] (0xc0009fa420) Data frame received for 5\nI0211 14:51:15.966477    2625 log.go:172] (0xc000678140) (5) Data frame handling\nI0211 14:51:15.966487    2625 log.go:172] (0xc000678140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0211 14:51:16.070776    2625 log.go:172] (0xc0009fa420) (0xc0003319a0) Stream removed, broadcasting: 3\nI0211 14:51:16.071333    2625 log.go:172] (0xc0009fa420) Data frame received for 1\nI0211 14:51:16.071491    2625 log.go:172] (0xc0009fa420) (0xc000678140) Stream removed, broadcasting: 5\nI0211 14:51:16.071883    2625 log.go:172] (0xc000a268c0) (1) Data frame handling\nI0211 14:51:16.071913    2625 log.go:172] (0xc000a268c0) (1) Data frame sent\nI0211 14:51:16.071925    2625 log.go:172] (0xc0009fa420) (0xc000a268c0) Stream removed, broadcasting: 1\nI0211 14:51:16.071943    2625 log.go:172] (0xc0009fa420) Go away received\nI0211 14:51:16.073144    2625 log.go:172] (0xc0009fa420) (0xc000a268c0) Stream removed, broadcasting: 1\nI0211 14:51:16.073161    2625 log.go:172] (0xc0009fa420) (0xc0003319a0) Stream removed, broadcasting: 3\nI0211 14:51:16.073172    2625 log.go:172] (0xc0009fa420) (0xc000678140) Stream removed, broadcasting: 5\n"
Feb 11 14:51:16.082: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 11 14:51:16.082: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 11 14:51:16.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:51:16.731: INFO: stderr: "I0211 14:51:16.371595    2645 log.go:172] (0xc000118dc0) (0xc000588820) Create stream\nI0211 14:51:16.371852    2645 log.go:172] (0xc000118dc0) (0xc000588820) Stream added, broadcasting: 1\nI0211 14:51:16.378428    2645 log.go:172] (0xc000118dc0) Reply frame received for 1\nI0211 14:51:16.378581    2645 log.go:172] (0xc000118dc0) (0xc00094e000) Create stream\nI0211 14:51:16.378616    2645 log.go:172] (0xc000118dc0) (0xc00094e000) Stream added, broadcasting: 3\nI0211 14:51:16.380607    2645 log.go:172] (0xc000118dc0) Reply frame received for 3\nI0211 14:51:16.380651    2645 log.go:172] (0xc000118dc0) (0xc0005888c0) Create stream\nI0211 14:51:16.380661    2645 log.go:172] (0xc000118dc0) (0xc0005888c0) Stream added, broadcasting: 5\nI0211 14:51:16.382502    2645 log.go:172] (0xc000118dc0) Reply frame received for 5\nI0211 14:51:16.514251    2645 log.go:172] (0xc000118dc0) Data frame received for 5\nI0211 14:51:16.514536    2645 log.go:172] (0xc0005888c0) (5) Data frame handling\nI0211 14:51:16.514614    2645 log.go:172] (0xc0005888c0) (5) Data frame sent\nI0211 14:51:16.514658    2645 log.go:172] (0xc000118dc0) Data frame received for 3\nI0211 14:51:16.514690    2645 log.go:172] (0xc00094e000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0211 14:51:16.514745    2645 log.go:172] (0xc00094e000) (3) Data frame sent\nI0211 14:51:16.708063    2645 log.go:172] (0xc000118dc0) (0xc00094e000) Stream removed, broadcasting: 3\nI0211 14:51:16.708452    2645 log.go:172] (0xc000118dc0) (0xc0005888c0) Stream removed, broadcasting: 5\nI0211 14:51:16.708835    2645 log.go:172] (0xc000118dc0) Data frame received for 1\nI0211 14:51:16.709578    2645 log.go:172] (0xc000588820) (1) Data frame handling\nI0211 14:51:16.709715    2645 log.go:172] (0xc000588820) (1) Data frame sent\nI0211 14:51:16.710181    2645 log.go:172] (0xc000118dc0) (0xc000588820) Stream removed, broadcasting: 1\nI0211 14:51:16.710307    2645 log.go:172] (0xc000118dc0) Go away received\nI0211 14:51:16.713235    2645 log.go:172] (0xc000118dc0) (0xc000588820) Stream removed, broadcasting: 1\nI0211 14:51:16.713268    2645 log.go:172] (0xc000118dc0) (0xc00094e000) Stream removed, broadcasting: 3\nI0211 14:51:16.713290    2645 log.go:172] (0xc000118dc0) (0xc0005888c0) Stream removed, broadcasting: 5\n"
Feb 11 14:51:16.731: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 11 14:51:16.731: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 11 14:51:16.746: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 14:51:16.746: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 14:51:16.746: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
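
All three pods report Ready=true because the execs above moved index.html back under nginx's web root, which the pods' readiness probe evidently keys on. A minimal sketch of driving the same kubectl exec from Go; the binary, kubeconfig, and file paths are copied from the log lines above, while the helper names are assumptions.

// Hypothetical sketch: shell out to kubectl exactly as the log does.
package sketch

import (
	"os/exec"
)

func runInPod(ns, pod, cmd string) (string, error) {
	out, err := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config",
		"exec", "--namespace="+ns, pod,
		"--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
	return string(out), err
}

// makeReady restores nginx's index page so the readiness probe passes again;
// "|| true" keeps the exec successful even if the file was already moved.
func makeReady(ns, pod string) (string, error) {
	return runInPod(ns, pod, "mv -v /tmp/index.html /usr/share/nginx/html/ || true")
}
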
STEP: Scale down will not halt with unhealthy stateful pod
Feb 11 14:51:16.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 11 14:51:17.275: INFO: stderr: "I0211 14:51:16.972415    2665 log.go:172] (0xc000116e70) (0xc000556a00) Create stream\nI0211 14:51:16.972684    2665 log.go:172] (0xc000116e70) (0xc000556a00) Stream added, broadcasting: 1\nI0211 14:51:16.981262    2665 log.go:172] (0xc000116e70) Reply frame received for 1\nI0211 14:51:16.981331    2665 log.go:172] (0xc000116e70) (0xc000742000) Create stream\nI0211 14:51:16.981345    2665 log.go:172] (0xc000116e70) (0xc000742000) Stream added, broadcasting: 3\nI0211 14:51:16.982873    2665 log.go:172] (0xc000116e70) Reply frame received for 3\nI0211 14:51:16.982928    2665 log.go:172] (0xc000116e70) (0xc0007f2000) Create stream\nI0211 14:51:16.982957    2665 log.go:172] (0xc000116e70) (0xc0007f2000) Stream added, broadcasting: 5\nI0211 14:51:16.984529    2665 log.go:172] (0xc000116e70) Reply frame received for 5\nI0211 14:51:17.085675    2665 log.go:172] (0xc000116e70) Data frame received for 3\nI0211 14:51:17.086094    2665 log.go:172] (0xc000742000) (3) Data frame handling\nI0211 14:51:17.086159    2665 log.go:172] (0xc000742000) (3) Data frame sent\nI0211 14:51:17.086488    2665 log.go:172] (0xc000116e70) Data frame received for 5\nI0211 14:51:17.086508    2665 log.go:172] (0xc0007f2000) (5) Data frame handling\nI0211 14:51:17.086523    2665 log.go:172] (0xc0007f2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0211 14:51:17.254433    2665 log.go:172] (0xc000116e70) (0xc000742000) Stream removed, broadcasting: 3\nI0211 14:51:17.254714    2665 log.go:172] (0xc000116e70) Data frame received for 1\nI0211 14:51:17.254959    2665 log.go:172] (0xc000116e70) (0xc0007f2000) Stream removed, broadcasting: 5\nI0211 14:51:17.255055    2665 log.go:172] (0xc000556a00) (1) Data frame handling\nI0211 14:51:17.255097    2665 log.go:172] (0xc000556a00) (1) Data frame sent\nI0211 14:51:17.255113    2665 log.go:172] (0xc000116e70) (0xc000556a00) Stream removed, broadcasting: 1\nI0211 14:51:17.255134    2665 log.go:172] (0xc000116e70) Go away received\nI0211 14:51:17.256309    2665 log.go:172] (0xc000116e70) (0xc000556a00) Stream removed, broadcasting: 1\nI0211 14:51:17.256335    2665 log.go:172] (0xc000116e70) (0xc000742000) Stream removed, broadcasting: 3\nI0211 14:51:17.256347    2665 log.go:172] (0xc000116e70) (0xc0007f2000) Stream removed, broadcasting: 5\n"
Feb 11 14:51:17.276: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 11 14:51:17.276: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 11 14:51:17.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 11 14:51:17.614: INFO: stderr: "I0211 14:51:17.424748    2685 log.go:172] (0xc000846420) (0xc0003e26e0) Create stream\nI0211 14:51:17.425000    2685 log.go:172] (0xc000846420) (0xc0003e26e0) Stream added, broadcasting: 1\nI0211 14:51:17.430480    2685 log.go:172] (0xc000846420) Reply frame received for 1\nI0211 14:51:17.430540    2685 log.go:172] (0xc000846420) (0xc00062a500) Create stream\nI0211 14:51:17.430576    2685 log.go:172] (0xc000846420) (0xc00062a500) Stream added, broadcasting: 3\nI0211 14:51:17.431564    2685 log.go:172] (0xc000846420) Reply frame received for 3\nI0211 14:51:17.431587    2685 log.go:172] (0xc000846420) (0xc00044a000) Create stream\nI0211 14:51:17.431599    2685 log.go:172] (0xc000846420) (0xc00044a000) Stream added, broadcasting: 5\nI0211 14:51:17.432390    2685 log.go:172] (0xc000846420) Reply frame received for 5\nI0211 14:51:17.514676    2685 log.go:172] (0xc000846420) Data frame received for 5\nI0211 14:51:17.514735    2685 log.go:172] (0xc00044a000) (5) Data frame handling\nI0211 14:51:17.514762    2685 log.go:172] (0xc00044a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0211 14:51:17.531433    2685 log.go:172] (0xc000846420) Data frame received for 3\nI0211 14:51:17.531459    2685 log.go:172] (0xc00062a500) (3) Data frame handling\nI0211 14:51:17.531477    2685 log.go:172] (0xc00062a500) (3) Data frame sent\nI0211 14:51:17.604589    2685 log.go:172] (0xc000846420) (0xc00044a000) Stream removed, broadcasting: 5\nI0211 14:51:17.604679    2685 log.go:172] (0xc000846420) Data frame received for 1\nI0211 14:51:17.604737    2685 log.go:172] (0xc000846420) (0xc00062a500) Stream removed, broadcasting: 3\nI0211 14:51:17.604784    2685 log.go:172] (0xc0003e26e0) (1) Data frame handling\nI0211 14:51:17.604799    2685 log.go:172] (0xc0003e26e0) (1) Data frame sent\nI0211 14:51:17.604810    2685 log.go:172] (0xc000846420) (0xc0003e26e0) Stream removed, broadcasting: 1\nI0211 14:51:17.604825    2685 log.go:172] (0xc000846420) Go away received\nI0211 14:51:17.605743    2685 log.go:172] (0xc000846420) (0xc0003e26e0) Stream removed, broadcasting: 1\nI0211 14:51:17.605754    2685 log.go:172] (0xc000846420) (0xc00062a500) Stream removed, broadcasting: 3\nI0211 14:51:17.605758    2685 log.go:172] (0xc000846420) (0xc00044a000) Stream removed, broadcasting: 5\n"
Feb 11 14:51:17.614: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 11 14:51:17.614: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 11 14:51:17.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 11 14:51:18.325: INFO: stderr: "I0211 14:51:17.805170    2704 log.go:172] (0xc0004ac630) (0xc0006eeaa0) Create stream\nI0211 14:51:17.805489    2704 log.go:172] (0xc0004ac630) (0xc0006eeaa0) Stream added, broadcasting: 1\nI0211 14:51:17.825238    2704 log.go:172] (0xc0004ac630) Reply frame received for 1\nI0211 14:51:17.825373    2704 log.go:172] (0xc0004ac630) (0xc000384000) Create stream\nI0211 14:51:17.825389    2704 log.go:172] (0xc0004ac630) (0xc000384000) Stream added, broadcasting: 3\nI0211 14:51:17.826418    2704 log.go:172] (0xc0004ac630) Reply frame received for 3\nI0211 14:51:17.826447    2704 log.go:172] (0xc0004ac630) (0xc0006ee320) Create stream\nI0211 14:51:17.826454    2704 log.go:172] (0xc0004ac630) (0xc0006ee320) Stream added, broadcasting: 5\nI0211 14:51:17.828814    2704 log.go:172] (0xc0004ac630) Reply frame received for 5\nI0211 14:51:18.046149    2704 log.go:172] (0xc0004ac630) Data frame received for 5\nI0211 14:51:18.047184    2704 log.go:172] (0xc0006ee320) (5) Data frame handling\nI0211 14:51:18.047385    2704 log.go:172] (0xc0006ee320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0211 14:51:18.122595    2704 log.go:172] (0xc0004ac630) Data frame received for 3\nI0211 14:51:18.122817    2704 log.go:172] (0xc000384000) (3) Data frame handling\nI0211 14:51:18.122908    2704 log.go:172] (0xc000384000) (3) Data frame sent\nI0211 14:51:18.306445    2704 log.go:172] (0xc0004ac630) (0xc000384000) Stream removed, broadcasting: 3\nI0211 14:51:18.306883    2704 log.go:172] (0xc0004ac630) Data frame received for 1\nI0211 14:51:18.307095    2704 log.go:172] (0xc0006eeaa0) (1) Data frame handling\nI0211 14:51:18.307161    2704 log.go:172] (0xc0006eeaa0) (1) Data frame sent\nI0211 14:51:18.307245    2704 log.go:172] (0xc0004ac630) (0xc0006ee320) Stream removed, broadcasting: 5\nI0211 14:51:18.307470    2704 log.go:172] (0xc0004ac630) (0xc0006eeaa0) Stream removed, broadcasting: 1\nI0211 14:51:18.307524    2704 log.go:172] (0xc0004ac630) Go away received\nI0211 14:51:18.309474    2704 log.go:172] (0xc0004ac630) (0xc0006eeaa0) Stream removed, broadcasting: 1\nI0211 14:51:18.309509    2704 log.go:172] (0xc0004ac630) (0xc000384000) Stream removed, broadcasting: 3\nI0211 14:51:18.309522    2704 log.go:172] (0xc0004ac630) (0xc0006ee320) Stream removed, broadcasting: 5\n"
Feb 11 14:51:18.326: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 11 14:51:18.326: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 11 14:51:18.326: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 14:51:18.345: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 11 14:51:28.363: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 11 14:51:28.363: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 11 14:51:28.363: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 11 14:51:28.397: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 11 14:51:28.397: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  }]
Feb 11 14:51:28.397: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:28.397: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:28.397: INFO: 
Feb 11 14:51:28.397: INFO: StatefulSet ss has not reached scale 0, at 3
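
The condition tables above are printed straight from each PodStatus; "Running - Ready=false" in the waits before them means phase Running with the Ready condition False. A sketch of the corresponding check, using real corev1 fields but a hypothetical helper name:

// Hypothetical sketch: report whether a pod is Running and Ready, based on
// the same PodConditions printed in the tables above.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

func runningAndReady(pod *corev1.Pod) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
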
Feb 11 14:51:30.120: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 11 14:51:30.120: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  }]
Feb 11 14:51:30.120: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:30.120: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:30.120: INFO: 
Feb 11 14:51:30.120: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 11 14:51:32.185: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 11 14:51:32.185: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  }]
Feb 11 14:51:32.185: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:32.185: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:32.185: INFO: 
Feb 11 14:51:32.185: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 11 14:51:33.197: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 11 14:51:33.197: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  }]
Feb 11 14:51:33.197: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:33.197: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:33.197: INFO: 
Feb 11 14:51:33.197: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 11 14:51:34.207: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 11 14:51:34.207: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  }]
Feb 11 14:51:34.207: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:34.207: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:34.207: INFO: 
Feb 11 14:51:34.207: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 11 14:51:35.216: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 11 14:51:35.216: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  }]
Feb 11 14:51:35.216: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:35.216: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:35.216: INFO: 
Feb 11 14:51:35.216: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 11 14:51:36.231: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 11 14:51:36.232: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  }]
Feb 11 14:51:36.232: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:36.232: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:36.232: INFO: 
Feb 11 14:51:36.232: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 11 14:51:37.244: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 11 14:51:37.244: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  }]
Feb 11 14:51:37.244: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:37.244: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:37.244: INFO: 
Feb 11 14:51:37.244: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 11 14:51:38.253: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 11 14:51:38.253: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:50:43 +0000 UTC  }]
Feb 11 14:51:38.253: INFO: ss-2  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 14:51:04 +0000 UTC  }]
Feb 11 14:51:38.253: INFO: 
Feb 11 14:51:38.253: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5917
Feb 11 14:51:39.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:51:39.649: INFO: rc: 1
Feb 11 14:51:39.650: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0025f6390 exit status 1   true [0xc0000ebad8 0xc0000ebb20 0xc0000ebb88] [0xc0000ebad8 0xc0000ebb20 0xc0000ebb88] [0xc0000ebb00 0xc0000ebb78] [0xba6c50 0xba6c50] 0xc00270d6e0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Feb 11 14:51:49.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:51:49.823: INFO: rc: 1
Feb 11 14:51:49.823: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002db6210 exit status 1   true [0xc002615430 0xc002615460 0xc002615498] [0xc002615430 0xc002615460 0xc002615498] [0xc002615458 0xc002615478] [0xba6c50 0xba6c50] 0xc002674c60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:51:59.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:52:00.007: INFO: rc: 1
Feb 11 14:52:00.007: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e07da0 exit status 1   true [0xc002112358 0xc002112370 0xc002112388] [0xc002112358 0xc002112370 0xc002112388] [0xc002112368 0xc002112380] [0xba6c50 0xba6c50] 0xc001c77080 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:52:10.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:52:10.206: INFO: rc: 1
Feb 11 14:52:10.206: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00161e090 exit status 1   true [0xc0007283e0 0xc0007284e8 0xc0007286a0] [0xc0007283e0 0xc0007284e8 0xc0007286a0] [0xc000728490 0xc000728638] [0xba6c50 0xba6c50] 0xc003364e40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:52:20.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:52:20.388: INFO: rc: 1
Feb 11 14:52:20.389: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00272c0f0 exit status 1   true [0xc000722038 0xc000722300 0xc0007226f0] [0xc000722038 0xc000722300 0xc0007226f0] [0xc000722290 0xc000722568] [0xba6c50 0xba6c50] 0xc002358de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:52:30.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:52:30.597: INFO: rc: 1
Feb 11 14:52:30.598: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00161e150 exit status 1   true [0xc000728700 0xc000728898 0xc000728d80] [0xc000728700 0xc000728898 0xc000728d80] [0xc0007287d8 0xc000728d68] [0xba6c50 0xba6c50] 0xc003365260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:52:40.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:52:40.754: INFO: rc: 1
Feb 11 14:52:40.754: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0035d00c0 exit status 1   true [0xc002112000 0xc002112018 0xc002112030] [0xc002112000 0xc002112018 0xc002112030] [0xc002112010 0xc002112028] [0xba6c50 0xba6c50] 0xc001a929c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:52:50.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:52:50.968: INFO: rc: 1
Feb 11 14:52:50.969: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0032fe090 exit status 1   true [0xc002020000 0xc002020020 0xc002020038] [0xc002020000 0xc002020020 0xc002020038] [0xc002020010 0xc002020030] [0xba6c50 0xba6c50] 0xc001865620 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:53:00.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:53:01.202: INFO: rc: 1
Feb 11 14:53:01.202: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00161e2a0 exit status 1   true [0xc000728dd8 0xc000728fb0 0xc000729038] [0xc000728dd8 0xc000728fb0 0xc000729038] [0xc000728f08 0xc000728fe8] [0xba6c50 0xba6c50] 0xc0033655c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:53:11.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:53:11.379: INFO: rc: 1
Feb 11 14:53:11.379: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00272c1b0 exit status 1   true [0xc000722980 0xc000722a88 0xc000722c58] [0xc000722980 0xc000722a88 0xc000722c58] [0xc000722a58 0xc000722c40] [0xba6c50 0xba6c50] 0xc002cae120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:53:21.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:53:21.582: INFO: rc: 1
Feb 11 14:53:21.583: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00272c270 exit status 1   true [0xc000722c78 0xc000722d60 0xc000722df8] [0xc000722c78 0xc000722d60 0xc000722df8] [0xc000722d00 0xc000722dc8] [0xba6c50 0xba6c50] 0xc002cae720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:53:31.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:53:31.741: INFO: rc: 1
Feb 11 14:53:31.741: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00272c330 exit status 1   true [0xc000722e48 0xc000723128 0xc000723498] [0xc000722e48 0xc000723128 0xc000723498] [0xc000722f60 0xc000723380] [0xba6c50 0xba6c50] 0xc002caed80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:53:41.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:53:41.959: INFO: rc: 1
Feb 11 14:53:41.959: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00272c3f0 exit status 1   true [0xc0007234d8 0xc0007235d8 0xc000723710] [0xc0007234d8 0xc0007235d8 0xc000723710] [0xc0007235a0 0xc000723700] [0xba6c50 0xba6c50] 0xc001e82060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:53:51.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:53:52.139: INFO: rc: 1
Feb 11 14:53:52.139: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00272c4b0 exit status 1   true [0xc000723790 0xc0007238e8 0xc000723960] [0xc000723790 0xc0007238e8 0xc000723960] [0xc000723830 0xc000723950] [0xba6c50 0xba6c50] 0xc001e83d40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:54:02.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:54:02.347: INFO: rc: 1
Feb 11 14:54:02.347: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00272c570 exit status 1   true [0xc000723968 0xc000723aa0 0xc000723b00] [0xc000723968 0xc000723aa0 0xc000723b00] [0xc000723a60 0xc000723af8] [0xba6c50 0xba6c50] 0xc0027a6240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:54:12.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:54:12.581: INFO: rc: 1
Feb 11 14:54:12.583: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0035d0090 exit status 1   true [0xc002112008 0xc002112020 0xc002112038] [0xc002112008 0xc002112020 0xc002112038] [0xc002112018 0xc002112030] [0xba6c50 0xba6c50] 0xc001e82600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:54:22.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:54:22.722: INFO: rc: 1
Feb 11 14:54:22.722: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0035d0180 exit status 1   true [0xc002112040 0xc002112058 0xc002112080] [0xc002112040 0xc002112058 0xc002112080] [0xc002112050 0xc002112068] [0xba6c50 0xba6c50] 0xc002cae300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:54:32.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:54:32.859: INFO: rc: 1
Feb 11 14:54:32.859: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0032fe0c0 exit status 1   true [0xc000722038 0xc000722300 0xc0007226f0] [0xc000722038 0xc000722300 0xc0007226f0] [0xc000722290 0xc000722568] [0xba6c50 0xba6c50] 0xc002358de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:54:42.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:54:43.047: INFO: rc: 1
Feb 11 14:54:43.048: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00161e0f0 exit status 1   true [0xc002020000 0xc002020020 0xc002020038] [0xc002020000 0xc002020020 0xc002020038] [0xc002020010 0xc002020030] [0xba6c50 0xba6c50] 0xc001a929c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:54:53.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:54:53.231: INFO: rc: 1
Feb 11 14:54:53.231: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0035d0270 exit status 1   true [0xc002112090 0xc0021120b8 0xc0021120d0] [0xc002112090 0xc0021120b8 0xc0021120d0] [0xc0021120b0 0xc0021120c8] [0xba6c50 0xba6c50] 0xc002cae840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:55:03.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:55:03.395: INFO: rc: 1
Feb 11 14:55:03.396: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0032fe180 exit status 1   true [0xc000722980 0xc000722a88 0xc000722c58] [0xc000722980 0xc000722a88 0xc000722c58] [0xc000722a58 0xc000722c40] [0xba6c50 0xba6c50] 0xc0027a6060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:55:13.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:55:13.580: INFO: rc: 1
Feb 11 14:55:13.580: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0032fe240 exit status 1   true [0xc000722c78 0xc000722d60 0xc000722df8] [0xc000722c78 0xc000722d60 0xc000722df8] [0xc000722d00 0xc000722dc8] [0xba6c50 0xba6c50] 0xc0027a63c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:55:23.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:55:23.757: INFO: rc: 1
Feb 11 14:55:23.757: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00272c180 exit status 1   true [0xc000728250 0xc000728490 0xc000728638] [0xc000728250 0xc000728490 0xc000728638] [0xc000728458 0xc0007285e0] [0xba6c50 0xba6c50] 0xc001865620 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:55:33.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:55:33.941: INFO: rc: 1
Feb 11 14:55:33.943: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0035d0390 exit status 1   true [0xc0021120d8 0xc0021120f0 0xc002112108] [0xc0021120d8 0xc0021120f0 0xc002112108] [0xc0021120e8 0xc002112100] [0xba6c50 0xba6c50] 0xc002caee40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:55:43.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:55:44.107: INFO: rc: 1
Feb 11 14:55:44.107: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0032fe330 exit status 1   true [0xc000722e48 0xc000723128 0xc000723498] [0xc000722e48 0xc000723128 0xc000723498] [0xc000722f60 0xc000723380] [0xba6c50 0xba6c50] 0xc0027a66c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:55:54.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:55:54.259: INFO: rc: 1
Feb 11 14:55:54.260: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00272c2a0 exit status 1   true [0xc0007286a0 0xc0007287d8 0xc000728d68] [0xc0007286a0 0xc0007287d8 0xc000728d68] [0xc000728770 0xc000728d48] [0xba6c50 0xba6c50] 0xc001865f80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:56:04.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:56:04.424: INFO: rc: 1
Feb 11 14:56:04.424: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00272c390 exit status 1   true [0xc000728d80 0xc000728f08 0xc000728fe8] [0xc000728d80 0xc000728f08 0xc000728fe8] [0xc000728e70 0xc000728fd8] [0xba6c50 0xba6c50] 0xc003364f60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:56:14.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:56:14.683: INFO: rc: 1
Feb 11 14:56:14.684: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00161e090 exit status 1   true [0xc002020008 0xc002020028 0xc002020040] [0xc002020008 0xc002020028 0xc002020040] [0xc002020020 0xc002020038] [0xba6c50 0xba6c50] 0xc001865620 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:56:24.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:56:24.841: INFO: rc: 1
Feb 11 14:56:24.842: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0032fe0f0 exit status 1   true [0xc000722038 0xc000722300 0xc0007226f0] [0xc000722038 0xc000722300 0xc0007226f0] [0xc000722290 0xc000722568] [0xba6c50 0xba6c50] 0xc002358de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:56:34.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:56:35.077: INFO: rc: 1
Feb 11 14:56:35.077: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00272c0f0 exit status 1   true [0xc000728250 0xc000728490 0xc000728638] [0xc000728250 0xc000728490 0xc000728638] [0xc000728458 0xc0007285e0] [0xba6c50 0xba6c50] 0xc001e82600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 11 14:56:45.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 11 14:56:45.341: INFO: rc: 1
Feb 11 14:56:45.341: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
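
Every failed RunHostCmd above is retried on a fixed 10-second cadence: first the exec fails because the container is being torn down ("container not found"), then because the pod itself is gone ("pods \"ss-0\" not found"), until the test stops retrying and proceeds. A sketch of that retry shape with apimachinery's wait.Poll; the five-minute cap is an assumption, since the test's real bound isn't visible in the log.

// Hypothetical sketch of the retry cadence above: run a command every 10s
// until it succeeds or the timeout elapses.
package sketch

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func retryUntilSuccess(run func() error) error {
	return wait.Poll(10*time.Second, 5*time.Minute, func() (bool, error) {
		if err := run(); err != nil {
			// e.g. "container not found" or pod NotFound while the
			// scale-down is in flight: keep polling, don't fail yet.
			return false, nil
		}
		return true, nil
	})
}
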
Feb 11 14:56:45.341: INFO: Scaling statefulset ss to 0
Feb 11 14:56:45.358: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 11 14:56:45.361: INFO: Deleting all statefulset in ns statefulset-5917
Feb 11 14:56:45.367: INFO: Scaling statefulset ss to 0
Feb 11 14:56:45.373: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 14:56:45.374: INFO: Deleting statefulset ss
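
Teardown above is the same pattern in miniature: set spec.replicas to zero, wait for status.replicas to drain, then delete. A sketch of the scale-and-wait half, using the context-free client-go signatures contemporary with this v1.15 run; the helper name and two-minute wait are assumptions.

// Hypothetical sketch: scale a StatefulSet to zero, then wait for
// status.replicas to report 0, matching the teardown logged above.
package sketch

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func scaleToZero(cs kubernetes.Interface, ns, name string) error {
	ss, err := cs.AppsV1().StatefulSets(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	zero := int32(0)
	ss.Spec.Replicas = &zero
	if _, err := cs.AppsV1().StatefulSets(ns).Update(ss); err != nil {
		return err
	}
	return wait.Poll(time.Second, 2*time.Minute, func() (bool, error) {
		cur, err := cs.AppsV1().StatefulSets(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return cur.Status.Replicas == 0, nil
	})
}
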
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:56:45.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5917" for this suite.
Feb 11 14:56:51.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:56:51.624: INFO: namespace statefulset-5917 deletion completed in 6.145676998s

• [SLOW TEST:368.683 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
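Editor's note: the teardown above scales StatefulSet "ss" to zero replicas and then waits for status.replicas to catch up. As a rough illustration of that pattern outside the e2e framework, here is a minimal client-go sketch; it assumes the context-free, pre-1.18 client-go method signatures that match this log's v1.15 cluster, and takes the namespace and name from the log above. This is not the framework's own code.

  package main

  import (
      "fmt"
      "time"

      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      // Build a client from the same kubeconfig the suite uses.
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      ns, name := "statefulset-5917", "ss" // taken from the log above

      // Scale down by setting spec.replicas to 0.
      ss, err := cs.AppsV1().StatefulSets(ns).Get(name, metav1.GetOptions{})
      if err != nil {
          panic(err)
      }
      zero := int32(0)
      ss.Spec.Replicas = &zero
      if _, err := cs.AppsV1().StatefulSets(ns).Update(ss); err != nil {
          panic(err)
      }

      // Poll until status.replicas reaches 0, mirroring
      // "Waiting for statefulset status.replicas updated to 0".
      for {
          ss, err = cs.AppsV1().StatefulSets(ns).Get(name, metav1.GetOptions{})
          if err != nil {
              panic(err)
          }
          if ss.Status.Replicas == 0 {
              break
          }
          time.Sleep(2 * time.Second)
      }
      fmt.Println("statefulset scaled to zero")
  }

------------------------------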
SSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:56:51.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:57:45.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-833" for this suite.
Feb 11 14:57:51.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:57:51.644: INFO: namespace container-runtime-833 deletion completed in 6.179705567s

• [SLOW TEST:60.020 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
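Editor's note: the blackbox test above runs short-lived containers under each restart policy (the suffixes rpa/rpof/rpn presumably abbreviate Always/OnFailure/Never) and asserts on RestartCount, Phase, Ready and State. A minimal sketch of observing those status fields with client-go; pod name, image and namespace here are illustrative, and the pre-1.18 signature assumption applies.

  package main

  import (
      "fmt"
      "time"

      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      // A container that exits immediately; with RestartPolicy=Never it
      // terminates once and stays terminated.
      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "terminate-demo"},
          Spec: corev1.PodSpec{
              RestartPolicy: corev1.RestartPolicyNever,
              Containers: []corev1.Container{{
                  Name:    "c",
                  Image:   "busybox",
                  Command: []string{"sh", "-c", "exit 0"},
              }},
          },
      }
      if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
          panic(err)
      }

      // Poll the fields the test asserts on.
      for {
          p, err := cs.CoreV1().Pods("default").Get("terminate-demo", metav1.GetOptions{})
          if err != nil {
              panic(err)
          }
          if len(p.Status.ContainerStatuses) > 0 {
              st := p.Status.ContainerStatuses[0]
              if st.State.Terminated != nil {
                  fmt.Println(p.Status.Phase, st.RestartCount, st.State.Terminated.ExitCode)
                  return
              }
          }
          time.Sleep(time.Second)
      }
  }

------------------------------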
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:57:51.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-b7bea8a9-85c3-4093-b631-aa23e25be53a
STEP: Creating a pod to test consume secrets
Feb 11 14:57:51.781: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b5916e9c-fb6d-451c-8173-62c073e7161c" in namespace "projected-3625" to be "success or failure"
Feb 11 14:57:51.791: INFO: Pod "pod-projected-secrets-b5916e9c-fb6d-451c-8173-62c073e7161c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.536077ms
Feb 11 14:57:53.800: INFO: Pod "pod-projected-secrets-b5916e9c-fb6d-451c-8173-62c073e7161c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018985206s
Feb 11 14:57:55.837: INFO: Pod "pod-projected-secrets-b5916e9c-fb6d-451c-8173-62c073e7161c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055909557s
Feb 11 14:57:57.844: INFO: Pod "pod-projected-secrets-b5916e9c-fb6d-451c-8173-62c073e7161c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062666425s
Feb 11 14:57:59.853: INFO: Pod "pod-projected-secrets-b5916e9c-fb6d-451c-8173-62c073e7161c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071581213s
STEP: Saw pod success
Feb 11 14:57:59.853: INFO: Pod "pod-projected-secrets-b5916e9c-fb6d-451c-8173-62c073e7161c" satisfied condition "success or failure"
Feb 11 14:57:59.858: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-b5916e9c-fb6d-451c-8173-62c073e7161c container projected-secret-volume-test: 
STEP: delete the pod
Feb 11 14:57:59.997: INFO: Waiting for pod pod-projected-secrets-b5916e9c-fb6d-451c-8173-62c073e7161c to disappear
Feb 11 14:58:00.008: INFO: Pod pod-projected-secrets-b5916e9c-fb6d-451c-8173-62c073e7161c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:58:00.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3625" for this suite.
Feb 11 14:58:06.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:58:06.178: INFO: namespace projected-3625 deletion completed in 6.157696802s

• [SLOW TEST:14.533 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
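Editor's note: "with mappings" above means the secret key is projected to a different file name via a KeyToPath item, rather than under its own key name. A hedged sketch of such a pod spec (the secret name, key and path are illustrative, not the test's generated names; the namespace is taken from the log; pre-1.18 client-go signatures assumed):

  package main

  import (
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
          Spec: corev1.PodSpec{
              RestartPolicy: corev1.RestartPolicyNever,
              Volumes: []corev1.Volume{{
                  Name: "projected-secret-volume",
                  VolumeSource: corev1.VolumeSource{
                      Projected: &corev1.ProjectedVolumeSource{
                          Sources: []corev1.VolumeProjection{{
                              Secret: &corev1.SecretProjection{
                                  LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
                                  // The "mapping": key data-1 appears in the
                                  // volume as new-path-data-1.
                                  Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
                              },
                          }},
                      },
                  },
              }},
              Containers: []corev1.Container{{
                  Name:    "projected-secret-volume-test",
                  Image:   "busybox",
                  Command: []string{"cat", "/projected/new-path-data-1"},
                  VolumeMounts: []corev1.VolumeMount{{
                      Name:      "projected-secret-volume",
                      MountPath: "/projected",
                  }},
              }},
          },
      }
      if _, err := cs.CoreV1().Pods("projected-3625").Create(pod); err != nil {
          panic(err)
      }
  }

------------------------------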
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:58:06.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 11 14:58:06.353: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:58:19.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1245" for this suite.
Feb 11 14:58:25.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:58:25.201: INFO: namespace init-container-1245 deletion completed in 6.163134822s

• [SLOW TEST:19.023 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
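Editor's note: the behavior exercised above is that a failing init container on a RestartPolicy=Never pod blocks the app containers and drives the whole pod to Phase=Failed. A minimal sketch of reproducing that (names and namespace illustrative; pre-1.18 client-go signatures assumed):

  package main

  import (
      "fmt"
      "time"

      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
          Spec: corev1.PodSpec{
              RestartPolicy: corev1.RestartPolicyNever,
              // The failing init container should prevent "app" from ever starting.
              InitContainers: []corev1.Container{{
                  Name:    "init-fail",
                  Image:   "busybox",
                  Command: []string{"sh", "-c", "exit 1"},
              }},
              Containers: []corev1.Container{{
                  Name:    "app",
                  Image:   "busybox",
                  Command: []string{"sh", "-c", "sleep 3600"},
              }},
          },
      }
      if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
          panic(err)
      }
      for {
          p, err := cs.CoreV1().Pods("default").Get("pod-init-fail", metav1.GetOptions{})
          if err != nil {
              panic(err)
          }
          if p.Status.Phase == corev1.PodFailed {
              fmt.Println("pod failed as expected; app container never started")
              return
          }
          time.Sleep(time.Second)
      }
  }

------------------------------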
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:58:25.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 11 14:58:32.600: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:58:32.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5869" for this suite.
Feb 11 14:58:38.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:58:38.929: INFO: namespace container-runtime-5869 deletion completed in 6.185149555s

• [SLOW TEST:13.727 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
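Editor's note: the "Expected: &{OK} to match ..." line above comes from the pod writing "OK" to its termination-log file and the test reading it back from container status. A sketch of the same round trip (pod name and image illustrative; namespace from the log; pre-1.18 client-go signatures assumed):

  package main

  import (
      "fmt"
      "time"

      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
          Spec: corev1.PodSpec{
              RestartPolicy: corev1.RestartPolicyNever,
              Containers: []corev1.Container{{
                  Name:    "c",
                  Image:   "busybox",
                  Command: []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
                  TerminationMessagePath:   "/dev/termination-log",
                  TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
              }},
          },
      }
      if _, err := cs.CoreV1().Pods("container-runtime-5869").Create(pod); err != nil {
          panic(err)
      }
      // Once terminated, the kubelet surfaces the file's contents in status.
      for {
          p, err := cs.CoreV1().Pods("container-runtime-5869").Get("termination-message-demo", metav1.GetOptions{})
          if err != nil {
              panic(err)
          }
          if len(p.Status.ContainerStatuses) > 0 {
              if t := p.Status.ContainerStatuses[0].State.Terminated; t != nil {
                  fmt.Printf("termination message: %q\n", t.Message) // expect "OK"
                  return
              }
          }
          time.Sleep(time.Second)
      }
  }

------------------------------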
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:58:38.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-jn8l
STEP: Creating a pod to test atomic-volume-subpath
Feb 11 14:58:39.129: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-jn8l" in namespace "subpath-310" to be "success or failure"
Feb 11 14:58:39.166: INFO: Pod "pod-subpath-test-projected-jn8l": Phase="Pending", Reason="", readiness=false. Elapsed: 36.743364ms
Feb 11 14:58:41.301: INFO: Pod "pod-subpath-test-projected-jn8l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17159582s
Feb 11 14:58:43.315: INFO: Pod "pod-subpath-test-projected-jn8l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185605569s
Feb 11 14:58:45.325: INFO: Pod "pod-subpath-test-projected-jn8l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.195131247s
Feb 11 14:58:47.335: INFO: Pod "pod-subpath-test-projected-jn8l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.205761652s
Feb 11 14:58:49.346: INFO: Pod "pod-subpath-test-projected-jn8l": Phase="Running", Reason="", readiness=true. Elapsed: 10.216803967s
Feb 11 14:58:51.355: INFO: Pod "pod-subpath-test-projected-jn8l": Phase="Running", Reason="", readiness=true. Elapsed: 12.225076647s
Feb 11 14:58:53.364: INFO: Pod "pod-subpath-test-projected-jn8l": Phase="Running", Reason="", readiness=true. Elapsed: 14.234778166s
Feb 11 14:58:55.377: INFO: Pod "pod-subpath-test-projected-jn8l": Phase="Running", Reason="", readiness=true. Elapsed: 16.247122282s
Feb 11 14:58:57.389: INFO: Pod "pod-subpath-test-projected-jn8l": Phase="Running", Reason="", readiness=true. Elapsed: 18.259093381s
Feb 11 14:58:59.398: INFO: Pod "pod-subpath-test-projected-jn8l": Phase="Running", Reason="", readiness=true. Elapsed: 20.268652116s
Feb 11 14:59:01.408: INFO: Pod "pod-subpath-test-projected-jn8l": Phase="Running", Reason="", readiness=true. Elapsed: 22.278826464s
Feb 11 14:59:03.419: INFO: Pod "pod-subpath-test-projected-jn8l": Phase="Running", Reason="", readiness=true. Elapsed: 24.289453873s
Feb 11 14:59:05.698: INFO: Pod "pod-subpath-test-projected-jn8l": Phase="Running", Reason="", readiness=true. Elapsed: 26.56807728s
Feb 11 14:59:07.706: INFO: Pod "pod-subpath-test-projected-jn8l": Phase="Running", Reason="", readiness=true. Elapsed: 28.576401001s
Feb 11 14:59:09.714: INFO: Pod "pod-subpath-test-projected-jn8l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.584438959s
STEP: Saw pod success
Feb 11 14:59:09.714: INFO: Pod "pod-subpath-test-projected-jn8l" satisfied condition "success or failure"
Feb 11 14:59:09.718: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-jn8l container test-container-subpath-projected-jn8l: 
STEP: delete the pod
Feb 11 14:59:09.776: INFO: Waiting for pod pod-subpath-test-projected-jn8l to disappear
Feb 11 14:59:09.786: INFO: Pod pod-subpath-test-projected-jn8l no longer exists
STEP: Deleting pod pod-subpath-test-projected-jn8l
Feb 11 14:59:09.786: INFO: Deleting pod "pod-subpath-test-projected-jn8l" in namespace "subpath-310"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:59:09.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-310" for this suite.
Feb 11 14:59:15.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:59:16.035: INFO: namespace subpath-310 deletion completed in 6.227878955s

• [SLOW TEST:37.105 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
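Editor's note: the subpath mechanism exercised above exposes a single path inside a volume at the container's mount point. A hedged sketch of a pod spec using SubPath on a projected volume (the configMap name and key are illustrative, and this simplifies what the atomic-writer test actually builds; pre-1.18 client-go signatures assumed):

  package main

  import (
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-demo"},
          Spec: corev1.PodSpec{
              RestartPolicy: corev1.RestartPolicyNever,
              Volumes: []corev1.Volume{{
                  Name: "projected-vol",
                  VolumeSource: corev1.VolumeSource{
                      Projected: &corev1.ProjectedVolumeSource{
                          Sources: []corev1.VolumeProjection{{
                              ConfigMap: &corev1.ConfigMapProjection{
                                  LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
                              },
                          }},
                      },
                  },
              }},
              Containers: []corev1.Container{{
                  Name:    "test-container-subpath",
                  Image:   "busybox",
                  Command: []string{"cat", "/test-volume"},
                  VolumeMounts: []corev1.VolumeMount{{
                      Name:      "projected-vol",
                      MountPath: "/test-volume",
                      // SubPath exposes only this entry of the volume at the
                      // mount point, instead of the whole volume.
                      SubPath: "my-key",
                  }},
              }},
          },
      }
      if _, err := cs.CoreV1().Pods("subpath-310").Create(pod); err != nil {
          panic(err)
      }
  }

------------------------------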
SS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:59:16.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 11 14:59:16.127: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 14:59:36.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1378" for this suite.
Feb 11 14:59:42.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 14:59:42.873: INFO: namespace pods-1378 deletion completed in 6.225786654s

• [SLOW TEST:26.838 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
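Editor's note: the "setting up watch" / "verifying pod creation was observed" steps above are a watch on the pod's labels. A minimal sketch of that observation loop (the label selector is illustrative; pre-1.18 client-go signatures assumed):

  package main

  import (
      "fmt"

      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/apimachinery/pkg/watch"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      // Creation shows up as ADDED, graceful termination as MODIFIED events,
      // and the final removal as DELETED.
      w, err := cs.CoreV1().Pods("pods-1378").Watch(metav1.ListOptions{
          LabelSelector: "name=submit-demo",
      })
      if err != nil {
          panic(err)
      }
      defer w.Stop()
      for ev := range w.ResultChan() {
          fmt.Println("event:", ev.Type)
          if ev.Type == watch.Deleted {
              return
          }
      }
  }

------------------------------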
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 14:59:42.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb 11 14:59:43.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9828'
Feb 11 14:59:45.331: INFO: stderr: ""
Feb 11 14:59:45.331: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 11 14:59:45.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9828'
Feb 11 14:59:45.556: INFO: stderr: ""
Feb 11 14:59:45.556: INFO: stdout: "update-demo-nautilus-p7nmh update-demo-nautilus-sx86f "
Feb 11 14:59:45.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p7nmh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9828'
Feb 11 14:59:45.673: INFO: stderr: ""
Feb 11 14:59:45.673: INFO: stdout: ""
Feb 11 14:59:45.673: INFO: update-demo-nautilus-p7nmh is created but not running
Feb 11 14:59:50.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9828'
Feb 11 14:59:51.728: INFO: stderr: ""
Feb 11 14:59:51.728: INFO: stdout: "update-demo-nautilus-p7nmh update-demo-nautilus-sx86f "
Feb 11 14:59:51.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p7nmh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9828'
Feb 11 14:59:52.011: INFO: stderr: ""
Feb 11 14:59:52.011: INFO: stdout: ""
Feb 11 14:59:52.012: INFO: update-demo-nautilus-p7nmh is created but not running
Feb 11 14:59:57.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9828'
Feb 11 14:59:57.180: INFO: stderr: ""
Feb 11 14:59:57.180: INFO: stdout: "update-demo-nautilus-p7nmh update-demo-nautilus-sx86f "
Feb 11 14:59:57.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p7nmh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9828'
Feb 11 14:59:57.356: INFO: stderr: ""
Feb 11 14:59:57.356: INFO: stdout: "true"
Feb 11 14:59:57.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p7nmh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9828'
Feb 11 14:59:57.510: INFO: stderr: ""
Feb 11 14:59:57.510: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 14:59:57.510: INFO: validating pod update-demo-nautilus-p7nmh
Feb 11 14:59:57.533: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 14:59:57.533: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 11 14:59:57.533: INFO: update-demo-nautilus-p7nmh is verified up and running
Feb 11 14:59:57.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sx86f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9828'
Feb 11 14:59:57.681: INFO: stderr: ""
Feb 11 14:59:57.681: INFO: stdout: "true"
Feb 11 14:59:57.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sx86f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9828'
Feb 11 14:59:57.808: INFO: stderr: ""
Feb 11 14:59:57.808: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 14:59:57.808: INFO: validating pod update-demo-nautilus-sx86f
Feb 11 14:59:57.814: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 14:59:57.814: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 11 14:59:57.814: INFO: update-demo-nautilus-sx86f is verified up and running
STEP: rolling-update to new replication controller
Feb 11 14:59:57.816: INFO: scanned /root for discovery docs: 
Feb 11 14:59:57.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9828'
Feb 11 15:00:26.970: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 11 15:00:26.971: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 11 15:00:26.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9828'
Feb 11 15:00:27.152: INFO: stderr: ""
Feb 11 15:00:27.152: INFO: stdout: "update-demo-kitten-btxzn update-demo-kitten-d89qx "
Feb 11 15:00:27.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-btxzn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9828'
Feb 11 15:00:27.314: INFO: stderr: ""
Feb 11 15:00:27.315: INFO: stdout: "true"
Feb 11 15:00:27.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-btxzn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9828'
Feb 11 15:00:27.427: INFO: stderr: ""
Feb 11 15:00:27.427: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 11 15:00:27.427: INFO: validating pod update-demo-kitten-btxzn
Feb 11 15:00:27.445: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 11 15:00:27.445: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 11 15:00:27.445: INFO: update-demo-kitten-btxzn is verified up and running
Feb 11 15:00:27.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-d89qx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9828'
Feb 11 15:00:27.566: INFO: stderr: ""
Feb 11 15:00:27.566: INFO: stdout: "true"
Feb 11 15:00:27.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-d89qx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9828'
Feb 11 15:00:27.691: INFO: stderr: ""
Feb 11 15:00:27.691: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 11 15:00:27.691: INFO: validating pod update-demo-kitten-d89qx
Feb 11 15:00:27.713: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 11 15:00:27.713: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 11 15:00:27.713: INFO: update-demo-kitten-d89qx is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:00:27.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9828" for this suite.
Feb 11 15:00:51.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:00:51.925: INFO: namespace kubectl-9828 deletion completed in 24.207159787s

• [SLOW TEST:69.049 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
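Editor's note: after the rolling update, the test re-validates each pod by templating out its running state and image with kubectl go-templates. The same image check can be expressed as a label-selector list in client-go (namespace, label and container name taken from the log above; pre-1.18 signatures assumed):

  package main

  import (
      "fmt"

      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      // List the pods the RC manages and report the image of the
      // "update-demo" container, as the templates above do.
      pods, err := cs.CoreV1().Pods("kubectl-9828").List(metav1.ListOptions{
          LabelSelector: "name=update-demo",
      })
      if err != nil {
          panic(err)
      }
      for _, p := range pods.Items {
          for _, c := range p.Spec.Containers {
              if c.Name == "update-demo" {
                  fmt.Println(p.Name, c.Image) // expect the kitten:1.0 image post-update
              }
          }
      }
  }

------------------------------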
SSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:00:51.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 11 15:01:00.622: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9392 pod-service-account-b0a28000-1c19-43fe-a5a7-33e0d548b6d6 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 11 15:01:01.163: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9392 pod-service-account-b0a28000-1c19-43fe-a5a7-33e0d548b6d6 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 11 15:01:01.577: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9392 pod-service-account-b0a28000-1c19-43fe-a5a7-33e0d548b6d6 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:01:02.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9392" for this suite.
Feb 11 15:01:10.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:01:10.236: INFO: namespace svcaccounts-9392 deletion completed in 8.167701035s

• [SLOW TEST:18.311 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
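Editor's note: the three kubectl exec calls above read the auto-mounted service-account credentials. From inside any pod they are ordinary files under a fixed directory, so the in-pod equivalent is just:

  package main

  import (
      "fmt"
      "io/ioutil"
  )

  func main() {
      // The kubelet mounts the service-account token, CA bundle and
      // namespace here by default.
      base := "/var/run/secrets/kubernetes.io/serviceaccount/"
      for _, f := range []string{"token", "ca.crt", "namespace"} {
          b, err := ioutil.ReadFile(base + f)
          if err != nil {
              panic(err)
          }
          fmt.Printf("%s: %d bytes\n", f, len(b))
      }
  }

------------------------------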
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:01:10.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 11 15:01:10.433: INFO: Number of nodes with available pods: 0
Feb 11 15:01:10.434: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:11.727: INFO: Number of nodes with available pods: 0
Feb 11 15:01:11.727: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:12.447: INFO: Number of nodes with available pods: 0
Feb 11 15:01:12.447: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:13.453: INFO: Number of nodes with available pods: 0
Feb 11 15:01:13.453: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:14.455: INFO: Number of nodes with available pods: 0
Feb 11 15:01:14.456: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:16.182: INFO: Number of nodes with available pods: 0
Feb 11 15:01:16.182: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:16.875: INFO: Number of nodes with available pods: 0
Feb 11 15:01:16.875: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:18.117: INFO: Number of nodes with available pods: 0
Feb 11 15:01:18.118: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:19.049: INFO: Number of nodes with available pods: 0
Feb 11 15:01:19.050: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:20.037: INFO: Number of nodes with available pods: 0
Feb 11 15:01:20.038: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:20.523: INFO: Number of nodes with available pods: 0
Feb 11 15:01:20.524: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:21.451: INFO: Number of nodes with available pods: 1
Feb 11 15:01:21.451: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:22.474: INFO: Number of nodes with available pods: 1
Feb 11 15:01:22.474: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:23.455: INFO: Number of nodes with available pods: 2
Feb 11 15:01:23.455: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 11 15:01:23.517: INFO: Number of nodes with available pods: 1
Feb 11 15:01:23.517: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:24.551: INFO: Number of nodes with available pods: 1
Feb 11 15:01:24.551: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:25.532: INFO: Number of nodes with available pods: 1
Feb 11 15:01:25.532: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:26.563: INFO: Number of nodes with available pods: 1
Feb 11 15:01:26.564: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:27.541: INFO: Number of nodes with available pods: 1
Feb 11 15:01:27.541: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:28.548: INFO: Number of nodes with available pods: 1
Feb 11 15:01:28.548: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:29.543: INFO: Number of nodes with available pods: 1
Feb 11 15:01:29.543: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:30.554: INFO: Number of nodes with available pods: 1
Feb 11 15:01:30.554: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:31.537: INFO: Number of nodes with available pods: 1
Feb 11 15:01:31.537: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:32.541: INFO: Number of nodes with available pods: 1
Feb 11 15:01:32.541: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:33.539: INFO: Number of nodes with available pods: 1
Feb 11 15:01:33.539: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:34.565: INFO: Number of nodes with available pods: 1
Feb 11 15:01:34.566: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:35.534: INFO: Number of nodes with available pods: 1
Feb 11 15:01:35.534: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:36.601: INFO: Number of nodes with available pods: 1
Feb 11 15:01:36.601: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:37.539: INFO: Number of nodes with available pods: 1
Feb 11 15:01:37.539: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:38.538: INFO: Number of nodes with available pods: 1
Feb 11 15:01:38.539: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:39.559: INFO: Number of nodes with available pods: 1
Feb 11 15:01:39.559: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:40.550: INFO: Number of nodes with available pods: 1
Feb 11 15:01:40.550: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:41.539: INFO: Number of nodes with available pods: 1
Feb 11 15:01:41.539: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:42.547: INFO: Number of nodes with available pods: 1
Feb 11 15:01:42.548: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:01:43.537: INFO: Number of nodes with available pods: 2
Feb 11 15:01:43.537: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5227, will wait for the garbage collector to delete the pods
Feb 11 15:01:43.628: INFO: Deleting DaemonSet.extensions daemon-set took: 30.310503ms
Feb 11 15:01:43.929: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.310169ms
Feb 11 15:01:57.938: INFO: Number of nodes with available pods: 0
Feb 11 15:01:57.938: INFO: Number of running nodes: 0, number of available pods: 0
Feb 11 15:01:57.943: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5227/daemonsets","resourceVersion":"23962568"},"items":null}

Feb 11 15:01:57.947: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5227/pods","resourceVersion":"23962568"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:01:57.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5227" for this suite.
Feb 11 15:02:03.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:02:04.093: INFO: namespace daemonsets-5227 deletion completed in 6.128218518s

• [SLOW TEST:53.854 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:02:04.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 11 15:02:04.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5193'
Feb 11 15:02:04.803: INFO: stderr: ""
Feb 11 15:02:04.803: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 11 15:02:05.820: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 15:02:05.821: INFO: Found 0 / 1
Feb 11 15:02:06.814: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 15:02:06.814: INFO: Found 0 / 1
Feb 11 15:02:07.848: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 15:02:07.849: INFO: Found 0 / 1
Feb 11 15:02:08.813: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 15:02:08.813: INFO: Found 0 / 1
Feb 11 15:02:09.823: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 15:02:09.823: INFO: Found 0 / 1
Feb 11 15:02:10.817: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 15:02:10.817: INFO: Found 0 / 1
Feb 11 15:02:11.879: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 15:02:11.880: INFO: Found 1 / 1
Feb 11 15:02:11.880: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 11 15:02:11.893: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 15:02:11.893: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 11 15:02:11.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-zdshz --namespace=kubectl-5193 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 11 15:02:12.080: INFO: stderr: ""
Feb 11 15:02:12.080: INFO: stdout: "pod/redis-master-zdshz patched\n"
STEP: checking annotations
Feb 11 15:02:12.089: INFO: Selector matched 1 pods for map[app:redis]
Feb 11 15:02:12.089: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:02:12.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5193" for this suite.
Feb 11 15:02:34.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:02:34.262: INFO: namespace kubectl-5193 deletion completed in 22.167189005s

• [SLOW TEST:30.167 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
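Editor's note: the kubectl patch above applies a strategic-merge patch adding annotation x=y. The same call through client-go, using the pod name, namespace and patch body from the log (pre-1.18 Patch signature, which takes no context):

  package main

  import (
      "k8s.io/apimachinery/pkg/types"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      // Strategic merge: annotations are merged into metadata rather than
      // replacing the whole map.
      patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
      if _, err := cs.CoreV1().Pods("kubectl-5193").Patch(
          "redis-master-zdshz", types.StrategicMergePatchType, patch); err != nil {
          panic(err)
      }
  }

------------------------------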
SS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:02:34.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 11 15:02:34.348: INFO: Waiting up to 5m0s for pod "downward-api-08b4677a-39f1-43c8-97aa-497410724d31" in namespace "downward-api-9179" to be "success or failure"
Feb 11 15:02:34.372: INFO: Pod "downward-api-08b4677a-39f1-43c8-97aa-497410724d31": Phase="Pending", Reason="", readiness=false. Elapsed: 24.573358ms
Feb 11 15:02:36.386: INFO: Pod "downward-api-08b4677a-39f1-43c8-97aa-497410724d31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038309766s
Feb 11 15:02:38.633: INFO: Pod "downward-api-08b4677a-39f1-43c8-97aa-497410724d31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.285165045s
Feb 11 15:02:40.647: INFO: Pod "downward-api-08b4677a-39f1-43c8-97aa-497410724d31": Phase="Pending", Reason="", readiness=false. Elapsed: 6.299274403s
Feb 11 15:02:42.661: INFO: Pod "downward-api-08b4677a-39f1-43c8-97aa-497410724d31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.313364967s
STEP: Saw pod success
Feb 11 15:02:42.662: INFO: Pod "downward-api-08b4677a-39f1-43c8-97aa-497410724d31" satisfied condition "success or failure"
Feb 11 15:02:42.667: INFO: Trying to get logs from node iruya-node pod downward-api-08b4677a-39f1-43c8-97aa-497410724d31 container dapi-container: 
STEP: delete the pod
Feb 11 15:02:42.834: INFO: Waiting for pod downward-api-08b4677a-39f1-43c8-97aa-497410724d31 to disappear
Feb 11 15:02:42.846: INFO: Pod downward-api-08b4677a-39f1-43c8-97aa-497410724d31 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:02:42.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9179" for this suite.
Feb 11 15:02:49.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:02:49.277: INFO: namespace downward-api-9179 deletion completed in 6.420814836s

• [SLOW TEST:15.014 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
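Editor's note: the downward API test above injects the pod's own UID into its environment. The mechanism is an EnvVar with a fieldRef; a sketch of such a pod spec (the env var name POD_UID is illustrative; namespace and container name from the log; pre-1.18 client-go signatures assumed):

  package main

  import (
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
          Spec: corev1.PodSpec{
              RestartPolicy: corev1.RestartPolicyNever,
              Containers: []corev1.Container{{
                  Name:    "dapi-container",
                  Image:   "busybox",
                  Command: []string{"sh", "-c", "env"},
                  Env: []corev1.EnvVar{{
                      Name: "POD_UID",
                      ValueFrom: &corev1.EnvVarSource{
                          // metadata.uid is resolved by the kubelet at start.
                          FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
                      },
                  }},
              }},
          },
      }
      if _, err := cs.CoreV1().Pods("downward-api-9179").Create(pod); err != nil {
          panic(err)
      }
  }

------------------------------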
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:02:49.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Feb 11 15:02:49.382: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:02:49.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2906" for this suite.
Feb 11 15:02:55.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:02:55.678: INFO: namespace kubectl-2906 deletion completed in 6.168079786s

• [SLOW TEST:6.400 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:02:55.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 11 15:03:06.420: INFO: Successfully updated pod "labelsupdated9605067-cb4d-436c-9349-9619f2d1d604"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:03:08.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3455" for this suite.
Feb 11 15:03:32.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:03:32.708: INFO: namespace downward-api-3455 deletion completed in 24.177220414s

• [SLOW TEST:37.029 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:03:32.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 11 15:03:41.678: INFO: Successfully updated pod "pod-update-activedeadlineseconds-4894ea0d-b748-4b91-9e05-c7b7f6f07486"
Feb 11 15:03:41.679: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-4894ea0d-b748-4b91-9e05-c7b7f6f07486" in namespace "pods-4283" to be "terminated due to deadline exceeded"
Feb 11 15:03:41.727: INFO: Pod "pod-update-activedeadlineseconds-4894ea0d-b748-4b91-9e05-c7b7f6f07486": Phase="Running", Reason="", readiness=true. Elapsed: 47.681389ms
Feb 11 15:03:43.740: INFO: Pod "pod-update-activedeadlineseconds-4894ea0d-b748-4b91-9e05-c7b7f6f07486": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.061297125s
Feb 11 15:03:43.741: INFO: Pod "pod-update-activedeadlineseconds-4894ea0d-b748-4b91-9e05-c7b7f6f07486" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:03:43.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4283" for this suite.
Feb 11 15:03:49.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:03:49.921: INFO: namespace pods-4283 deletion completed in 6.171505972s

• [SLOW TEST:17.213 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
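Editor's note: the test above updates spec.activeDeadlineSeconds on a running pod, then waits for Phase=Failed with Reason=DeadlineExceeded, exactly as the two status lines show. A sketch of that update (pod name and namespace from the log; validation generally only allows setting or decreasing this field, not raising it; pre-1.18 client-go signatures assumed):

  package main

  import (
      "fmt"
      "time"

      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      ns := "pods-4283"
      name := "pod-update-activedeadlineseconds-4894ea0d-b748-4b91-9e05-c7b7f6f07486"

      p, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
      if err != nil {
          panic(err)
      }
      deadline := int64(5) // seconds the pod may stay active from its start time
      p.Spec.ActiveDeadlineSeconds = &deadline
      if _, err := cs.CoreV1().Pods(ns).Update(p); err != nil {
          panic(err)
      }

      // Wait for the kubelet to kill the pod for exceeding the deadline.
      for {
          p, err = cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
          if err != nil {
              panic(err)
          }
          if p.Status.Phase == corev1.PodFailed && p.Status.Reason == "DeadlineExceeded" {
              fmt.Println("terminated due to deadline exceeded")
              return
          }
          time.Sleep(time.Second)
      }
  }

------------------------------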
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:03:49.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 11 15:03:50.052: INFO: Waiting up to 5m0s for pod "pod-d4a11921-8908-40c0-949b-efdb79f5df51" in namespace "emptydir-9265" to be "success or failure"
Feb 11 15:03:50.066: INFO: Pod "pod-d4a11921-8908-40c0-949b-efdb79f5df51": Phase="Pending", Reason="", readiness=false. Elapsed: 12.946829ms
Feb 11 15:03:52.075: INFO: Pod "pod-d4a11921-8908-40c0-949b-efdb79f5df51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022208864s
Feb 11 15:03:54.091: INFO: Pod "pod-d4a11921-8908-40c0-949b-efdb79f5df51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038238173s
Feb 11 15:03:56.102: INFO: Pod "pod-d4a11921-8908-40c0-949b-efdb79f5df51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049338668s
Feb 11 15:03:58.112: INFO: Pod "pod-d4a11921-8908-40c0-949b-efdb79f5df51": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059642295s
Feb 11 15:04:00.121: INFO: Pod "pod-d4a11921-8908-40c0-949b-efdb79f5df51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06843308s
STEP: Saw pod success
Feb 11 15:04:00.121: INFO: Pod "pod-d4a11921-8908-40c0-949b-efdb79f5df51" satisfied condition "success or failure"
Feb 11 15:04:00.127: INFO: Trying to get logs from node iruya-node pod pod-d4a11921-8908-40c0-949b-efdb79f5df51 container test-container: 
STEP: delete the pod
Feb 11 15:04:00.235: INFO: Waiting for pod pod-d4a11921-8908-40c0-949b-efdb79f5df51 to disappear
Feb 11 15:04:00.260: INFO: Pod pod-d4a11921-8908-40c0-949b-efdb79f5df51 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:04:00.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9265" for this suite.
Feb 11 15:04:06.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:04:06.645: INFO: namespace emptydir-9265 deletion completed in 6.361973351s

• [SLOW TEST:16.724 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
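Editor's note: "(non-root,0644,tmpfs)" above decodes to: run the container as a non-root UID, expect 0644 file permissions, and back the emptyDir with tmpfs (Medium=Memory). A sketch of such a pod spec (UID and file contents illustrative; namespace and container name from the log; pre-1.18 client-go signatures assumed):

  package main

  import (
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      uid := int64(1000) // any non-root UID
      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
          Spec: corev1.PodSpec{
              RestartPolicy: corev1.RestartPolicyNever,
              Volumes: []corev1.Volume{{
                  Name: "test-volume",
                  VolumeSource: corev1.VolumeSource{
                      // Medium=Memory backs the emptyDir with tmpfs.
                      EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                  },
              }},
              Containers: []corev1.Container{{
                  Name:  "test-container",
                  Image: "busybox",
                  // Write a file, force 0644, and print the resulting mode.
                  Command:         []string{"sh", "-c", "echo hi > /test/f && chmod 0644 /test/f && ls -l /test/f"},
                  SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
                  VolumeMounts: []corev1.VolumeMount{{
                      Name:      "test-volume",
                      MountPath: "/test",
                  }},
              }},
          },
      }
      if _, err := cs.CoreV1().Pods("emptydir-9265").Create(pod); err != nil {
          panic(err)
      }
  }

------------------------------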
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:04:06.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:04:36.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7615" for this suite.
Feb 11 15:04:43.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:04:43.158: INFO: namespace namespaces-7615 deletion completed in 6.154040192s
STEP: Destroying namespace "nsdeletetest-8375" for this suite.
Feb 11 15:04:43.160: INFO: Namespace nsdeletetest-8375 was already deleted
STEP: Destroying namespace "nsdeletetest-8263" for this suite.
Feb 11 15:04:49.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:04:49.333: INFO: namespace nsdeletetest-8263 deletion completed in 6.173694695s

• [SLOW TEST:42.687 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
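
A sketch of the flow this spec verifies, with a hypothetical namespace name: deleting a Namespace object cascades to every pod inside it, and a recreated namespace of the same name starts empty.

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-demo          # hypothetical name
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: nsdeletetest-demo
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine   # image seen elsewhere in this run
# kubectl delete namespace nsdeletetest-demo removes the pod via cascading
# deletion; recreating the namespace and listing pods then returns nothing.
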
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:04:49.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-97c9f9db-b0af-4559-aa36-ddf0cd7f5aa3
STEP: Creating a pod to test consume configMaps
Feb 11 15:04:49.432: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9ff67ec1-5f5d-4dfb-81da-7fbe7d831668" in namespace "projected-7095" to be "success or failure"
Feb 11 15:04:49.479: INFO: Pod "pod-projected-configmaps-9ff67ec1-5f5d-4dfb-81da-7fbe7d831668": Phase="Pending", Reason="", readiness=false. Elapsed: 47.177073ms
Feb 11 15:04:51.489: INFO: Pod "pod-projected-configmaps-9ff67ec1-5f5d-4dfb-81da-7fbe7d831668": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05699461s
Feb 11 15:04:53.501: INFO: Pod "pod-projected-configmaps-9ff67ec1-5f5d-4dfb-81da-7fbe7d831668": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069502496s
Feb 11 15:04:55.511: INFO: Pod "pod-projected-configmaps-9ff67ec1-5f5d-4dfb-81da-7fbe7d831668": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079508799s
Feb 11 15:04:57.522: INFO: Pod "pod-projected-configmaps-9ff67ec1-5f5d-4dfb-81da-7fbe7d831668": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089753859s
Feb 11 15:04:59.534: INFO: Pod "pod-projected-configmaps-9ff67ec1-5f5d-4dfb-81da-7fbe7d831668": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.10246369s
STEP: Saw pod success
Feb 11 15:04:59.535: INFO: Pod "pod-projected-configmaps-9ff67ec1-5f5d-4dfb-81da-7fbe7d831668" satisfied condition "success or failure"
Feb 11 15:04:59.540: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-9ff67ec1-5f5d-4dfb-81da-7fbe7d831668 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 11 15:04:59.750: INFO: Waiting for pod pod-projected-configmaps-9ff67ec1-5f5d-4dfb-81da-7fbe7d831668 to disappear
Feb 11 15:04:59.755: INFO: Pod pod-projected-configmaps-9ff67ec1-5f5d-4dfb-81da-7fbe7d831668 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:04:59.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7095" for this suite.
Feb 11 15:05:05.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:05:05.981: INFO: namespace projected-7095 deletion completed in 6.219322307s

• [SLOW TEST:16.647 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
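
A sketch of the pod shape behind this spec (names and image are illustrative): the same ConfigMap projected into two separate volumes mounted at two paths in one container, both of which must serve the same data.

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo          # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29            # illustrative
    command: ["sh", "-c", "cat /etc/cm-volume-1/data-1 /etc/cm-volume-2/data-1"]
    volumeMounts:
    - name: cm-volume-1
      mountPath: /etc/cm-volume-1
    - name: cm-volume-2
      mountPath: /etc/cm-volume-2
  volumes:
  - name: cm-volume-1
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
  - name: cm-volume-2
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
  restartPolicy: Never
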
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:05:05.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 11 15:05:06.089: INFO: Waiting up to 5m0s for pod "pod-4272f2fe-287c-4ebf-bb7e-f59c929fc04e" in namespace "emptydir-9511" to be "success or failure"
Feb 11 15:05:06.117: INFO: Pod "pod-4272f2fe-287c-4ebf-bb7e-f59c929fc04e": Phase="Pending", Reason="", readiness=false. Elapsed: 27.670648ms
Feb 11 15:05:08.130: INFO: Pod "pod-4272f2fe-287c-4ebf-bb7e-f59c929fc04e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041008782s
Feb 11 15:05:10.141: INFO: Pod "pod-4272f2fe-287c-4ebf-bb7e-f59c929fc04e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052085259s
Feb 11 15:05:12.150: INFO: Pod "pod-4272f2fe-287c-4ebf-bb7e-f59c929fc04e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06132203s
Feb 11 15:05:14.159: INFO: Pod "pod-4272f2fe-287c-4ebf-bb7e-f59c929fc04e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069700228s
Feb 11 15:05:16.171: INFO: Pod "pod-4272f2fe-287c-4ebf-bb7e-f59c929fc04e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082189273s
STEP: Saw pod success
Feb 11 15:05:16.171: INFO: Pod "pod-4272f2fe-287c-4ebf-bb7e-f59c929fc04e" satisfied condition "success or failure"
Feb 11 15:05:16.176: INFO: Trying to get logs from node iruya-node pod pod-4272f2fe-287c-4ebf-bb7e-f59c929fc04e container test-container: 
STEP: delete the pod
Feb 11 15:05:16.241: INFO: Waiting for pod pod-4272f2fe-287c-4ebf-bb7e-f59c929fc04e to disappear
Feb 11 15:05:16.258: INFO: Pod pod-4272f2fe-287c-4ebf-bb7e-f59c929fc04e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:05:16.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9511" for this suite.
Feb 11 15:05:22.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:05:22.522: INFO: namespace emptydir-9511 deletion completed in 6.256670753s

• [SLOW TEST:16.540 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:05:22.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 11 15:05:30.798: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:05:30.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3405" for this suite.
Feb 11 15:05:36.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:05:37.065: INFO: namespace container-runtime-3405 deletion completed in 6.222243137s

• [SLOW TEST:14.542 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
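
A sketch of what this spec sets up, assuming an illustrative image and path: the container writes "DONE" (matching the Expected: &{DONE} line above) to a non-default terminationMessagePath while running as a non-root user, and the kubelet copies that file into the container's terminated status.

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # hypothetical name
spec:
  securityContext:
    runAsUser: 1001                # non-root, per the spec name
  containers:
  - name: main
    image: busybox:1.29            # illustrative
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path
  restartPolicy: Never
# After exit, status.containerStatuses[0].state.terminated.message == "DONE".
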
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:05:37.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 15:05:37.214: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b4c4a79b-a177-4929-86c1-514d80048c32" in namespace "projected-8469" to be "success or failure"
Feb 11 15:05:37.229: INFO: Pod "downwardapi-volume-b4c4a79b-a177-4929-86c1-514d80048c32": Phase="Pending", Reason="", readiness=false. Elapsed: 13.972424ms
Feb 11 15:05:39.241: INFO: Pod "downwardapi-volume-b4c4a79b-a177-4929-86c1-514d80048c32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026779881s
Feb 11 15:05:41.258: INFO: Pod "downwardapi-volume-b4c4a79b-a177-4929-86c1-514d80048c32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043521079s
Feb 11 15:05:43.268: INFO: Pod "downwardapi-volume-b4c4a79b-a177-4929-86c1-514d80048c32": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053708476s
Feb 11 15:05:45.279: INFO: Pod "downwardapi-volume-b4c4a79b-a177-4929-86c1-514d80048c32": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06424787s
Feb 11 15:05:47.291: INFO: Pod "downwardapi-volume-b4c4a79b-a177-4929-86c1-514d80048c32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076593603s
STEP: Saw pod success
Feb 11 15:05:47.291: INFO: Pod "downwardapi-volume-b4c4a79b-a177-4929-86c1-514d80048c32" satisfied condition "success or failure"
Feb 11 15:05:47.299: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b4c4a79b-a177-4929-86c1-514d80048c32 container client-container: 
STEP: delete the pod
Feb 11 15:05:47.358: INFO: Waiting for pod downwardapi-volume-b4c4a79b-a177-4929-86c1-514d80048c32 to disappear
Feb 11 15:05:47.364: INFO: Pod downwardapi-volume-b4c4a79b-a177-4929-86c1-514d80048c32 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:05:47.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8469" for this suite.
Feb 11 15:05:53.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:05:53.640: INFO: namespace projected-8469 deletion completed in 6.266408447s

• [SLOW TEST:16.574 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
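
A sketch of the downward API volume this spec tests, with hypothetical names: the container's own CPU limit is projected into a file via resourceFieldRef. The per-item mode field shown here is the same mechanism the later "should set mode on item file" spec checks.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox:1.29            # illustrative
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m                  # the value surfaced through the volume
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            mode: 0400             # per-item file mode
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
  restartPolicy: Never
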
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:05:53.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 11 15:05:53.914: INFO: Number of nodes with available pods: 0
Feb 11 15:05:53.914: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:05:55.297: INFO: Number of nodes with available pods: 0
Feb 11 15:05:55.297: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:05:56.010: INFO: Number of nodes with available pods: 0
Feb 11 15:05:56.011: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:05:56.928: INFO: Number of nodes with available pods: 0
Feb 11 15:05:56.928: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:05:57.951: INFO: Number of nodes with available pods: 0
Feb 11 15:05:57.951: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:05:58.926: INFO: Number of nodes with available pods: 0
Feb 11 15:05:58.926: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:06:01.479: INFO: Number of nodes with available pods: 0
Feb 11 15:06:01.479: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:06:01.934: INFO: Number of nodes with available pods: 0
Feb 11 15:06:01.934: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:06:02.928: INFO: Number of nodes with available pods: 0
Feb 11 15:06:02.928: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:06:03.952: INFO: Number of nodes with available pods: 0
Feb 11 15:06:03.952: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:06:04.936: INFO: Number of nodes with available pods: 1
Feb 11 15:06:04.936: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:06:05.931: INFO: Number of nodes with available pods: 2
Feb 11 15:06:05.931: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 11 15:06:06.007: INFO: Number of nodes with available pods: 1
Feb 11 15:06:06.007: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 11 15:06:07.020: INFO: Number of nodes with available pods: 1
Feb 11 15:06:07.020: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 11 15:06:08.046: INFO: Number of nodes with available pods: 1
Feb 11 15:06:08.047: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 11 15:06:09.027: INFO: Number of nodes with available pods: 1
Feb 11 15:06:09.027: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 11 15:06:10.019: INFO: Number of nodes with available pods: 1
Feb 11 15:06:10.019: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 11 15:06:11.614: INFO: Number of nodes with available pods: 1
Feb 11 15:06:11.614: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 11 15:06:12.096: INFO: Number of nodes with available pods: 1
Feb 11 15:06:12.096: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 11 15:06:13.024: INFO: Number of nodes with available pods: 1
Feb 11 15:06:13.024: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 11 15:06:14.027: INFO: Number of nodes with available pods: 1
Feb 11 15:06:14.027: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 11 15:06:15.028: INFO: Number of nodes with available pods: 2
Feb 11 15:06:15.028: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4653, will wait for the garbage collector to delete the pods
Feb 11 15:06:15.111: INFO: Deleting DaemonSet.extensions daemon-set took: 23.50894ms
Feb 11 15:06:15.513: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.443854ms
Feb 11 15:06:22.233: INFO: Number of nodes with available pods: 0
Feb 11 15:06:22.233: INFO: Number of running nodes: 0, number of available pods: 0
Feb 11 15:06:22.236: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4653/daemonsets","resourceVersion":"23963277"},"items":null}

Feb 11 15:06:22.239: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4653/pods","resourceVersion":"23963277"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:06:22.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4653" for this suite.
Feb 11 15:06:28.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:06:28.494: INFO: namespace daemonsets-4653 deletion completed in 6.239596834s

• [SLOW TEST:34.853 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
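
A minimal DaemonSet matching the shape this spec creates (the name and image come from this log; labels are illustrative). One pod runs per eligible node; when the test forces a pod's phase to Failed, the controller deletes it and creates a replacement, which is the revival step logged above.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                 # name taken from the log
spec:
  selector:
    matchLabels:
      app: daemon-set              # illustrative label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # image seen in this run
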
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:06:28.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:06:40.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3368" for this suite.
Feb 11 15:06:46.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:06:46.823: INFO: namespace kubelet-test-3368 deletion completed in 6.169379179s

• [SLOW TEST:18.326 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
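
A sketch of the always-failing pod behind this spec (name and image are illustrative): a command that exits non-zero leaves a terminated container state whose reason field must be populated, which is what the assertion checks.

apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo             # hypothetical name
spec:
  containers:
  - name: bin-false
    image: busybox:1.29            # illustrative
    command: ["/bin/false"]        # always exits with status 1
  restartPolicy: Never
# status.containerStatuses[0].state.terminated.reason is set (e.g. "Error").
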
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:06:46.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 11 15:06:46.925: INFO: Waiting up to 5m0s for pod "pod-2f2cd28f-dec5-4fb0-9d23-556dcbfa8f2b" in namespace "emptydir-5693" to be "success or failure"
Feb 11 15:06:46.953: INFO: Pod "pod-2f2cd28f-dec5-4fb0-9d23-556dcbfa8f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 27.957242ms
Feb 11 15:06:48.975: INFO: Pod "pod-2f2cd28f-dec5-4fb0-9d23-556dcbfa8f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050137534s
Feb 11 15:06:51.027: INFO: Pod "pod-2f2cd28f-dec5-4fb0-9d23-556dcbfa8f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102547509s
Feb 11 15:06:53.039: INFO: Pod "pod-2f2cd28f-dec5-4fb0-9d23-556dcbfa8f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114488389s
Feb 11 15:06:55.096: INFO: Pod "pod-2f2cd28f-dec5-4fb0-9d23-556dcbfa8f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171210038s
Feb 11 15:06:57.105: INFO: Pod "pod-2f2cd28f-dec5-4fb0-9d23-556dcbfa8f2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.180572405s
STEP: Saw pod success
Feb 11 15:06:57.106: INFO: Pod "pod-2f2cd28f-dec5-4fb0-9d23-556dcbfa8f2b" satisfied condition "success or failure"
Feb 11 15:06:57.111: INFO: Trying to get logs from node iruya-node pod pod-2f2cd28f-dec5-4fb0-9d23-556dcbfa8f2b container test-container: 
STEP: delete the pod
Feb 11 15:06:57.162: INFO: Waiting for pod pod-2f2cd28f-dec5-4fb0-9d23-556dcbfa8f2b to disappear
Feb 11 15:06:57.222: INFO: Pod pod-2f2cd28f-dec5-4fb0-9d23-556dcbfa8f2b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:06:57.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5693" for this suite.
Feb 11 15:07:03.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:07:03.413: INFO: namespace emptydir-5693 deletion completed in 6.182827422s

• [SLOW TEST:16.589 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:07:03.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 11 15:07:03.567: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 11 15:07:03.600: INFO: Number of nodes with available pods: 0
Feb 11 15:07:03.600: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:05.162: INFO: Number of nodes with available pods: 0
Feb 11 15:07:05.162: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:05.677: INFO: Number of nodes with available pods: 0
Feb 11 15:07:05.677: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:06.628: INFO: Number of nodes with available pods: 0
Feb 11 15:07:06.628: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:07.617: INFO: Number of nodes with available pods: 0
Feb 11 15:07:07.617: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:09.309: INFO: Number of nodes with available pods: 0
Feb 11 15:07:09.309: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:10.025: INFO: Number of nodes with available pods: 0
Feb 11 15:07:10.025: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:11.550: INFO: Number of nodes with available pods: 0
Feb 11 15:07:11.550: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:11.849: INFO: Number of nodes with available pods: 0
Feb 11 15:07:11.849: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:12.633: INFO: Number of nodes with available pods: 0
Feb 11 15:07:12.633: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:13.621: INFO: Number of nodes with available pods: 2
Feb 11 15:07:13.621: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 11 15:07:13.794: INFO: Wrong image for pod: daemon-set-k9qnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:13.794: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:14.848: INFO: Wrong image for pod: daemon-set-k9qnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:14.848: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:15.850: INFO: Wrong image for pod: daemon-set-k9qnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:15.850: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:16.859: INFO: Wrong image for pod: daemon-set-k9qnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:16.859: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:17.854: INFO: Wrong image for pod: daemon-set-k9qnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:17.854: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:18.855: INFO: Wrong image for pod: daemon-set-k9qnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:18.855: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:19.851: INFO: Wrong image for pod: daemon-set-k9qnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:19.851: INFO: Pod daemon-set-k9qnn is not available
Feb 11 15:07:19.851: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:20.858: INFO: Wrong image for pod: daemon-set-k9qnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:20.859: INFO: Pod daemon-set-k9qnn is not available
Feb 11 15:07:20.859: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:21.850: INFO: Wrong image for pod: daemon-set-k9qnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:21.850: INFO: Pod daemon-set-k9qnn is not available
Feb 11 15:07:21.850: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:22.856: INFO: Wrong image for pod: daemon-set-k9qnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:22.857: INFO: Pod daemon-set-k9qnn is not available
Feb 11 15:07:22.857: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:23.864: INFO: Wrong image for pod: daemon-set-k9qnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:23.864: INFO: Pod daemon-set-k9qnn is not available
Feb 11 15:07:23.864: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:24.860: INFO: Wrong image for pod: daemon-set-k9qnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:24.861: INFO: Pod daemon-set-k9qnn is not available
Feb 11 15:07:24.861: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:25.845: INFO: Wrong image for pod: daemon-set-k9qnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:25.845: INFO: Pod daemon-set-k9qnn is not available
Feb 11 15:07:25.845: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:26.849: INFO: Wrong image for pod: daemon-set-k9qnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:26.850: INFO: Pod daemon-set-k9qnn is not available
Feb 11 15:07:26.850: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:27.933: INFO: Wrong image for pod: daemon-set-k9qnn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:27.933: INFO: Pod daemon-set-k9qnn is not available
Feb 11 15:07:27.933: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:28.851: INFO: Pod daemon-set-584x2 is not available
Feb 11 15:07:28.851: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:29.854: INFO: Pod daemon-set-584x2 is not available
Feb 11 15:07:29.854: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:30.859: INFO: Pod daemon-set-584x2 is not available
Feb 11 15:07:30.859: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:32.023: INFO: Pod daemon-set-584x2 is not available
Feb 11 15:07:32.023: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:32.913: INFO: Pod daemon-set-584x2 is not available
Feb 11 15:07:32.913: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:33.864: INFO: Pod daemon-set-584x2 is not available
Feb 11 15:07:33.864: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:34.854: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:35.852: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:36.848: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:37.849: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:38.848: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:38.848: INFO: Pod daemon-set-tn7tt is not available
Feb 11 15:07:39.853: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:39.853: INFO: Pod daemon-set-tn7tt is not available
Feb 11 15:07:40.850: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:40.850: INFO: Pod daemon-set-tn7tt is not available
Feb 11 15:07:41.851: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:41.851: INFO: Pod daemon-set-tn7tt is not available
Feb 11 15:07:42.849: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:42.850: INFO: Pod daemon-set-tn7tt is not available
Feb 11 15:07:43.852: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:43.853: INFO: Pod daemon-set-tn7tt is not available
Feb 11 15:07:44.849: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:44.849: INFO: Pod daemon-set-tn7tt is not available
Feb 11 15:07:45.854: INFO: Wrong image for pod: daemon-set-tn7tt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 11 15:07:45.854: INFO: Pod daemon-set-tn7tt is not available
Feb 11 15:07:46.857: INFO: Pod daemon-set-k7nk8 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 11 15:07:46.880: INFO: Number of nodes with available pods: 1
Feb 11 15:07:46.880: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:47.901: INFO: Number of nodes with available pods: 1
Feb 11 15:07:47.901: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:48.901: INFO: Number of nodes with available pods: 1
Feb 11 15:07:48.901: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:49.911: INFO: Number of nodes with available pods: 1
Feb 11 15:07:49.911: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:50.916: INFO: Number of nodes with available pods: 1
Feb 11 15:07:50.916: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:51.902: INFO: Number of nodes with available pods: 1
Feb 11 15:07:51.902: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:52.899: INFO: Number of nodes with available pods: 1
Feb 11 15:07:52.899: INFO: Node iruya-node is running more than one daemon pod
Feb 11 15:07:53.908: INFO: Number of nodes with available pods: 2
Feb 11 15:07:53.908: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8828, will wait for the garbage collector to delete the pods
Feb 11 15:07:54.013: INFO: Deleting DaemonSet.extensions daemon-set took: 14.236478ms
Feb 11 15:07:54.314: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.777341ms
Feb 11 15:08:07.924: INFO: Number of nodes with available pods: 0
Feb 11 15:08:07.924: INFO: Number of running nodes: 0, number of available pods: 0
Feb 11 15:08:07.928: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8828/daemonsets","resourceVersion":"23963566"},"items":null}

Feb 11 15:08:07.934: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8828/pods","resourceVersion":"23963566"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:08:07.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8828" for this suite.
Feb 11 15:08:13.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:08:14.127: INFO: namespace daemonsets-8828 deletion completed in 6.174771617s

• [SLOW TEST:70.713 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
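
The update strategy this spec drives, sketched as a manifest (name and both images come from the log; labels are illustrative). Patching the pod template image from nginx:1.14-alpine to the redis test image triggers the node-by-node delete-and-recreate cycle visible in the "Wrong image for pod" lines above.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                 # name taken from the log
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1            # replace one pod at a time
  selector:
    matchLabels:
      app: daemon-set              # illustrative label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # starting image from the log
# kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
# rolls every pod to the new image while respecting maxUnavailable.
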
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:08:14.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-c6ffde0a-0f95-472c-af8f-d76661e50f18
STEP: Creating a pod to test consume configMaps
Feb 11 15:08:14.246: INFO: Waiting up to 5m0s for pod "pod-configmaps-8aa0b12b-7a8d-4746-8187-028d92decc2f" in namespace "configmap-2284" to be "success or failure"
Feb 11 15:08:14.255: INFO: Pod "pod-configmaps-8aa0b12b-7a8d-4746-8187-028d92decc2f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.414552ms
Feb 11 15:08:16.269: INFO: Pod "pod-configmaps-8aa0b12b-7a8d-4746-8187-028d92decc2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022496478s
Feb 11 15:08:18.276: INFO: Pod "pod-configmaps-8aa0b12b-7a8d-4746-8187-028d92decc2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028943223s
Feb 11 15:08:20.285: INFO: Pod "pod-configmaps-8aa0b12b-7a8d-4746-8187-028d92decc2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038435963s
Feb 11 15:08:22.298: INFO: Pod "pod-configmaps-8aa0b12b-7a8d-4746-8187-028d92decc2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051551767s
STEP: Saw pod success
Feb 11 15:08:22.298: INFO: Pod "pod-configmaps-8aa0b12b-7a8d-4746-8187-028d92decc2f" satisfied condition "success or failure"
Feb 11 15:08:22.303: INFO: Trying to get logs from node iruya-node pod pod-configmaps-8aa0b12b-7a8d-4746-8187-028d92decc2f container configmap-volume-test: 
STEP: delete the pod
Feb 11 15:08:22.401: INFO: Waiting for pod pod-configmaps-8aa0b12b-7a8d-4746-8187-028d92decc2f to disappear
Feb 11 15:08:22.438: INFO: Pod pod-configmaps-8aa0b12b-7a8d-4746-8187-028d92decc2f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:08:22.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2284" for this suite.
Feb 11 15:08:28.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:08:28.669: INFO: namespace configmap-2284 deletion completed in 6.198371341s

• [SLOW TEST:14.540 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
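
A sketch of the "mappings" in this spec's name (names and image are illustrative): items remap a ConfigMap key to a custom file path, read by a non-root container. The later projected-ConfigMap "mappings and Item mode set" spec adds a per-item mode to the same construct.

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-volume-map-demo  # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  securityContext:
    runAsUser: 1001                # non-root, per the spec name
  containers:
  - name: configmap-volume-test
    image: busybox:1.29            # illustrative
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-volume-map-demo
      items:
      - key: data-1
        path: path/to/data-1       # key remapped to a custom path
        # mode: 0400 would additionally fix the item's file mode
  restartPolicy: Never
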
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:08:28.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 11 15:08:28.778: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 11 15:08:33.818: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:08:33.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6812" for this suite.
Feb 11 15:08:42.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:08:42.167: INFO: namespace replication-controller-6812 deletion completed in 8.253622652s

• [SLOW TEST:13.498 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
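
The release mechanism this spec exercises, sketched with the pod-release name from the log (image is illustrative): a ReplicationController only owns pods matching its selector, so relabeling a pod out of the selector orphans ("releases") it and the controller starts a matching replacement.

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release                # name taken from the log
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # illustrative
# kubectl label pod <pod> name=released --overwrite detaches the pod from
# the controller without deleting it.
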
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:08:42.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-9181
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9181 to expose endpoints map[]
Feb 11 15:08:42.483: INFO: successfully validated that service endpoint-test2 in namespace services-9181 exposes endpoints map[] (74.880488ms elapsed)
STEP: Creating pod pod1 in namespace services-9181
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9181 to expose endpoints map[pod1:[80]]
Feb 11 15:08:46.608: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.098294976s elapsed, will retry)
Feb 11 15:08:51.717: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.20760418s elapsed, will retry)
Feb 11 15:08:52.732: INFO: successfully validated that service endpoint-test2 in namespace services-9181 exposes endpoints map[pod1:[80]] (10.222021363s elapsed)
STEP: Creating pod pod2 in namespace services-9181
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9181 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 11 15:08:56.957: INFO: Unexpected endpoints: found map[240756f6-7136-4da2-b7d9-ebd68fd7e071:[80]], expected map[pod1:[80] pod2:[80]] (4.217537831s elapsed, will retry)
Feb 11 15:09:00.011: INFO: successfully validated that service endpoint-test2 in namespace services-9181 exposes endpoints map[pod1:[80] pod2:[80]] (7.271865355s elapsed)
STEP: Deleting pod pod1 in namespace services-9181
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9181 to expose endpoints map[pod2:[80]]
Feb 11 15:09:01.109: INFO: successfully validated that service endpoint-test2 in namespace services-9181 exposes endpoints map[pod2:[80]] (1.043067212s elapsed)
STEP: Deleting pod pod2 in namespace services-9181
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9181 to expose endpoints map[]
Feb 11 15:09:02.419: INFO: successfully validated that service endpoint-test2 in namespace services-9181 exposes endpoints map[] (1.301958594s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:09:03.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9181" for this suite.
Feb 11 15:09:25.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:09:25.184: INFO: namespace services-9181 deletion completed in 22.13788575s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:43.017 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
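
A sketch of the service-plus-pods setup behind this spec (service name from the log; image is an illustrative placeholder): the endpoints controller adds each ready pod whose labels match the selector to the service's Endpoints object (the map[pod1:[80]] entries above) and prunes entries when pods are deleted.

apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2             # name taken from the log
spec:
  selector:
    name: endpoint-test2           # illustrative selector label
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1                       # as in the log
  labels:
    name: endpoint-test2           # matching label adds pod1's IP:80
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1    # illustrative placeholder container
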
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:09:25.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-53446d82-a010-4b03-b066-4029bd2468c4
STEP: Creating configMap with name cm-test-opt-upd-951fdc43-b8a8-48ec-884b-4e380a6149d2
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-53446d82-a010-4b03-b066-4029bd2468c4
STEP: Updating configmap cm-test-opt-upd-951fdc43-b8a8-48ec-884b-4e380a6149d2
STEP: Creating configMap with name cm-test-opt-create-8acbccc2-4f35-4ec1-b5f0-88af09bf94b3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:09:39.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7103" for this suite.
Feb 11 15:10:01.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:10:01.909: INFO: namespace projected-7103 deletion completed in 22.179461059s

• [SLOW TEST:36.723 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
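
A sketch of the optional projection this spec relies on (names and image are illustrative): with optional: true, the pod starts even though the referenced ConfigMap does not exist yet, and the kubelet refreshes the projected files as the ConfigMaps above are deleted, updated, and created.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-opt-demo     # hypothetical name
spec:
  containers:
  - name: watcher
    image: busybox:1.29            # illustrative
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: opt-cm
      mountPath: /etc/cm
  volumes:
  - name: opt-cm
    projected:
      sources:
      - configMap:
          name: cm-test-opt-create-demo   # hypothetical; may not exist at pod start
          optional: true                  # missing ConfigMap is tolerated
# Once the ConfigMap is created or updated, the projected file converges to
# the new contents within the kubelet's sync period.
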
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:10:01.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-e0e98366-0a7a-4364-99ee-c5e63b066283
STEP: Creating a pod to test consume configMaps
Feb 11 15:10:02.013: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-54e7d27a-f293-4e56-82f4-bd45fc5d71af" in namespace "projected-1713" to be "success or failure"
Feb 11 15:10:02.020: INFO: Pod "pod-projected-configmaps-54e7d27a-f293-4e56-82f4-bd45fc5d71af": Phase="Pending", Reason="", readiness=false. Elapsed: 7.437791ms
Feb 11 15:10:04.028: INFO: Pod "pod-projected-configmaps-54e7d27a-f293-4e56-82f4-bd45fc5d71af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014875194s
Feb 11 15:10:06.035: INFO: Pod "pod-projected-configmaps-54e7d27a-f293-4e56-82f4-bd45fc5d71af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022570144s
Feb 11 15:10:08.042: INFO: Pod "pod-projected-configmaps-54e7d27a-f293-4e56-82f4-bd45fc5d71af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029604134s
Feb 11 15:10:10.050: INFO: Pod "pod-projected-configmaps-54e7d27a-f293-4e56-82f4-bd45fc5d71af": Phase="Pending", Reason="", readiness=false. Elapsed: 8.037205416s
Feb 11 15:10:12.061: INFO: Pod "pod-projected-configmaps-54e7d27a-f293-4e56-82f4-bd45fc5d71af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.048542257s
STEP: Saw pod success
Feb 11 15:10:12.062: INFO: Pod "pod-projected-configmaps-54e7d27a-f293-4e56-82f4-bd45fc5d71af" satisfied condition "success or failure"
Feb 11 15:10:12.071: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-54e7d27a-f293-4e56-82f4-bd45fc5d71af container projected-configmap-volume-test: 
STEP: delete the pod
Feb 11 15:10:12.154: INFO: Waiting for pod pod-projected-configmaps-54e7d27a-f293-4e56-82f4-bd45fc5d71af to disappear
Feb 11 15:10:12.159: INFO: Pod pod-projected-configmaps-54e7d27a-f293-4e56-82f4-bd45fc5d71af no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:10:12.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1713" for this suite.
Feb 11 15:10:18.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:10:18.365: INFO: namespace projected-1713 deletion completed in 6.160132639s

• [SLOW TEST:16.454 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 11 15:10:18.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 11 15:10:18.453: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12201421-ac88-4678-8699-c21b4b91010f" in namespace "projected-1617" to be "success or failure"
Feb 11 15:10:18.462: INFO: Pod "downwardapi-volume-12201421-ac88-4678-8699-c21b4b91010f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.19852ms
Feb 11 15:10:20.479: INFO: Pod "downwardapi-volume-12201421-ac88-4678-8699-c21b4b91010f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025785779s
Feb 11 15:10:22.495: INFO: Pod "downwardapi-volume-12201421-ac88-4678-8699-c21b4b91010f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042740287s
Feb 11 15:10:24.511: INFO: Pod "downwardapi-volume-12201421-ac88-4678-8699-c21b4b91010f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058265386s
Feb 11 15:10:26.533: INFO: Pod "downwardapi-volume-12201421-ac88-4678-8699-c21b4b91010f": Phase="Running", Reason="", readiness=true. Elapsed: 8.079902838s
Feb 11 15:10:28.553: INFO: Pod "downwardapi-volume-12201421-ac88-4678-8699-c21b4b91010f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.100417394s
STEP: Saw pod success
Feb 11 15:10:28.554: INFO: Pod "downwardapi-volume-12201421-ac88-4678-8699-c21b4b91010f" satisfied condition "success or failure"
Feb 11 15:10:28.560: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-12201421-ac88-4678-8699-c21b4b91010f container client-container: 
STEP: delete the pod
Feb 11 15:10:28.646: INFO: Waiting for pod downwardapi-volume-12201421-ac88-4678-8699-c21b4b91010f to disappear
Feb 11 15:10:28.703: INFO: Pod downwardapi-volume-12201421-ac88-4678-8699-c21b4b91010f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 11 15:10:28.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1617" for this suite.
Feb 11 15:10:34.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 11 15:10:34.981: INFO: namespace projected-1617 deletion completed in 6.216012625s

• [SLOW TEST:16.614 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
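Note: the downwardAPI spec above checks that an explicit Mode on a projected downwardAPI item is applied to the written file. A minimal sketch follows (assumed, not the test's literal source); the volume name, path, and mode value are placeholders, while the types are the real core/v1 API.

	package main

	import (
		corev1 "k8s.io/api/core/v1"
	)

	func downwardAPIVolume() corev1.Volume {
		itemMode := int32(0400) // mode set on the item file: the behavior under test
		return corev1.Volume{
			Name: "podinfo",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						DownwardAPI: &corev1.DownwardAPIProjection{
							Items: []corev1.DownwardAPIVolumeFile{{
								Path: "podname", // file the kubelet writes into the volume
								FieldRef: &corev1.ObjectFieldSelector{
									APIVersion: "v1",
									FieldPath:  "metadata.name",
								},
								Mode: &itemMode,
							}},
						},
					}},
				},
			},
		}
	}

	func main() { _ = downwardAPIVolume() }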
SSSSSSSSSSSSSSSSSSSSSSS
Feb 11 15:10:34.981: INFO: Running AfterSuite actions on all nodes
Feb 11 15:10:34.981: INFO: Running AfterSuite actions on node 1
Feb 11 15:10:34.981: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8055.615 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS
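
Note: both specs in this section block on the framework's "Waiting up to 5m0s for pod ... to be 'success or failure'" condition, visible in the Elapsed lines above. The following is a simplified stand-in (assumed; not framework.go's literal code) for that loop: poll the pod phase every 2s until it is terminal or the wait times out. The context-free Get signature matches the client-go release line contemporary with v1.15.

	package main

	import (
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitForPodSuccessOrFailure(c kubernetes.Interface, ns, name string) error {
		return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
			if err != nil {
				return false, err // stop polling on lookup errors
			}
			switch pod.Status.Phase {
			case corev1.PodSucceeded:
				return true, nil // the "Saw pod success" case
			case corev1.PodFailed:
				return false, fmt.Errorf("pod %s/%s failed", ns, name)
			default:
				return false, nil // Pending/Running: keep polling
			}
		})
	}

	func main() {}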