I0715 12:55:48.840247 6 e2e.go:243] Starting e2e run "ef314d12-e6fb-4f53-a950-ab1d6803a998" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1594817748 - Will randomize all specs
Will run 215 of 4413 specs

Jul 15 12:55:49.029: INFO: >>> kubeConfig: /root/.kube/config
Jul 15 12:55:49.033: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 15 12:55:49.049: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 15 12:55:49.078: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 15 12:55:49.078: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 15 12:55:49.078: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 15 12:55:49.085: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 15 12:55:49.085: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 15 12:55:49.085: INFO: e2e test version: v1.15.12
Jul 15 12:55:49.086: INFO: kube-apiserver version: v1.15.11
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 12:55:49.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Jul 15 12:55:49.148: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 15 12:55:49.154: INFO: Waiting up to 5m0s for pod "downwardapi-volume-047e7e72-90f5-4254-a92e-05484418760b" in namespace "projected-148" to be "success or failure"
Jul 15 12:55:49.202: INFO: Pod "downwardapi-volume-047e7e72-90f5-4254-a92e-05484418760b": Phase="Pending", Reason="", readiness=false. Elapsed: 47.551156ms
Jul 15 12:55:51.277: INFO: Pod "downwardapi-volume-047e7e72-90f5-4254-a92e-05484418760b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122035877s
Jul 15 12:55:53.281: INFO: Pod "downwardapi-volume-047e7e72-90f5-4254-a92e-05484418760b": Phase="Running", Reason="", readiness=true. Elapsed: 4.126358056s
Jul 15 12:55:55.285: INFO: Pod "downwardapi-volume-047e7e72-90f5-4254-a92e-05484418760b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.130630988s
STEP: Saw pod success
Jul 15 12:55:55.285: INFO: Pod "downwardapi-volume-047e7e72-90f5-4254-a92e-05484418760b" satisfied condition "success or failure"
Jul 15 12:55:55.288: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-047e7e72-90f5-4254-a92e-05484418760b container client-container: 
STEP: delete the pod
Jul 15 12:55:55.308: INFO: Waiting for pod downwardapi-volume-047e7e72-90f5-4254-a92e-05484418760b to disappear
Jul 15 12:55:55.313: INFO: Pod downwardapi-volume-047e7e72-90f5-4254-a92e-05484418760b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 12:55:55.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-148" for this suite.
Jul 15 12:56:01.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 12:56:01.454: INFO: namespace projected-148 deletion completed in 6.137431541s

• [SLOW TEST:12.366 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
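What the spec above exercises: the pod mounts a projected downwardAPI volume that publishes limits.cpu for a container that sets no CPU limit, and the kubelet substitutes the node's allocatable CPU. A minimal sketch of that kind of pod follows (manifest, names, and image are illustrative, not the exact objects the e2e framework generates):

    # Hypothetical reproduction of the pod the test creates; with no CPU limit
    # set on the container, cpu_limit resolves to the node's allocatable CPU
    # (rounded up to whole cores by the default divisor of 1).
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-cpu-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.cpu
    EOF
    kubectl logs downwardapi-cpu-demo
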
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 12:56:01.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-e1cead1c-5d1d-4413-9a84-7ba9ae52efb1 in namespace container-probe-8437
Jul 15 12:56:05.576: INFO: Started pod liveness-e1cead1c-5d1d-4413-9a84-7ba9ae52efb1 in namespace container-probe-8437
STEP: checking the pod's current state and verifying that restartCount is present
Jul 15 12:56:05.579: INFO: Initial restart count of pod liveness-e1cead1c-5d1d-4413-9a84-7ba9ae52efb1 is 0
Jul 15 12:56:25.622: INFO: Restart count of pod container-probe-8437/liveness-e1cead1c-5d1d-4413-9a84-7ba9ae52efb1 is now 1 (20.042691606s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 12:56:25.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8437" for this suite.
Jul 15 12:56:31.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 12:56:31.733: INFO: namespace container-probe-8437 deletion completed in 6.094891203s

• [SLOW TEST:30.279 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
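The probe test above creates a pod whose /healthz endpoint goes unhealthy shortly after startup, then watches restartCount climb from 0 to 1 once the kubelet kills and restarts the container. An illustrative equivalent (image, port, and timings are assumptions, not the suite's exact values):

    # Sketch of an HTTP liveness probe that will trip and restart the container.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-http-demo
    spec:
      containers:
      - name: liveness
        image: k8s.gcr.io/liveness
        args: ["/server"]        # serves /healthz OK briefly, then returns 500
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3
    EOF
    # After the probe starts failing, restartCount increments:
    kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
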
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 12:56:31.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 15 12:56:31.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3485'
Jul 15 12:56:34.280: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 15 12:56:34.280: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jul 15 12:56:34.286: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jul 15 12:56:34.318: INFO: scanned /root for discovery docs: 
Jul 15 12:56:34.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3485'
Jul 15 12:56:50.177: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul 15 12:56:50.177: INFO: stdout: "Created e2e-test-nginx-rc-27fe9e002350831ac62cf029bb366d7c\nScaling up e2e-test-nginx-rc-27fe9e002350831ac62cf029bb366d7c from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-27fe9e002350831ac62cf029bb366d7c up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-27fe9e002350831ac62cf029bb366d7c to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jul 15 12:56:50.177: INFO: stdout: "Created e2e-test-nginx-rc-27fe9e002350831ac62cf029bb366d7c\nScaling up e2e-test-nginx-rc-27fe9e002350831ac62cf029bb366d7c from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-27fe9e002350831ac62cf029bb366d7c up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-27fe9e002350831ac62cf029bb366d7c to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jul 15 12:56:50.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3485'
Jul 15 12:56:50.301: INFO: stderr: ""
Jul 15 12:56:50.301: INFO: stdout: "e2e-test-nginx-rc-27fe9e002350831ac62cf029bb366d7c-642xq "
Jul 15 12:56:50.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-27fe9e002350831ac62cf029bb366d7c-642xq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3485'
Jul 15 12:56:50.396: INFO: stderr: ""
Jul 15 12:56:50.396: INFO: stdout: "true"
Jul 15 12:56:50.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-27fe9e002350831ac62cf029bb366d7c-642xq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3485'
Jul 15 12:56:50.482: INFO: stderr: ""
Jul 15 12:56:50.482: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jul 15 12:56:50.482: INFO: e2e-test-nginx-rc-27fe9e002350831ac62cf029bb366d7c-642xq is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jul 15 12:56:50.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3485'
Jul 15 12:56:50.577: INFO: stderr: ""
Jul 15 12:56:50.577: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 12:56:50.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3485" for this suite.
Jul 15 12:57:12.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 12:57:12.701: INFO: namespace kubectl-3485 deletion completed in 22.107452595s

• [SLOW TEST:40.967 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
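Both commands the test shells out to announce their own deprecation in stderr: the run/v1 generator and kubectl rolling-update were removed in later releases. A rough present-day equivalent of "rolling-update to the same image" uses a Deployment plus a rollout restart (resource names here are illustrative):

    # Deployments replace RC rolling-update; a restart re-rolls the same image.
    kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
    kubectl rollout restart deployment/e2e-test-nginx   # new ReplicaSet, unchanged image
    kubectl rollout status deployment/e2e-test-nginx
    kubectl get rs -l app=e2e-test-nginx                # old ReplicaSet scaled to 0
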
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 12:57:12.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2764
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-2764
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2764
Jul 15 12:57:12.827: INFO: Found 0 stateful pods, waiting for 1
Jul 15 12:57:22.835: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jul 15 12:57:22.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2764 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 15 12:57:23.136: INFO: stderr: "I0715 12:57:22.975361 162 log.go:172] (0xc000a880b0) (0xc000550460) Create stream\nI0715 12:57:22.975419 162 log.go:172] (0xc000a880b0) (0xc000550460) Stream added, broadcasting: 1\nI0715 12:57:22.978126 162 log.go:172] (0xc000a880b0) Reply frame received for 1\nI0715 12:57:22.978220 162 log.go:172] (0xc000a880b0) (0xc0002f0000) Create stream\nI0715 12:57:22.978252 162 log.go:172] (0xc000a880b0) (0xc0002f0000) Stream added, broadcasting: 3\nI0715 12:57:22.979511 162 log.go:172] (0xc000a880b0) Reply frame received for 3\nI0715 12:57:22.979573 162 log.go:172] (0xc000a880b0) (0xc00031e000) Create stream\nI0715 12:57:22.979602 162 log.go:172] (0xc000a880b0) (0xc00031e000) Stream added, broadcasting: 5\nI0715 12:57:22.980662 162 log.go:172] (0xc000a880b0) Reply frame received for 5\nI0715 12:57:23.090154 162 log.go:172] (0xc000a880b0) Data frame received for 5\nI0715 12:57:23.090191 162 log.go:172] (0xc00031e000) (5) Data frame handling\nI0715 12:57:23.090212 162 log.go:172] (0xc00031e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0715 12:57:23.127338 162 log.go:172] (0xc000a880b0) Data frame received for 3\nI0715 12:57:23.127372 162 log.go:172] (0xc0002f0000) (3) Data frame handling\nI0715 12:57:23.127396 162 log.go:172] (0xc0002f0000) (3) Data frame sent\nI0715 12:57:23.127410 162 log.go:172] (0xc000a880b0) Data frame received for 3\nI0715 12:57:23.127422 162 log.go:172] (0xc0002f0000) (3) Data frame handling\nI0715 12:57:23.127699 162 log.go:172] (0xc000a880b0) Data frame received for 5\nI0715 12:57:23.127714 162 log.go:172] (0xc00031e000) (5) Data frame handling\nI0715 12:57:23.130245 162 log.go:172] (0xc000a880b0) Data frame received for 1\nI0715 12:57:23.130268 162 log.go:172] (0xc000550460) (1) Data frame handling\nI0715 12:57:23.130280 162 log.go:172] (0xc000550460) (1) Data frame sent\nI0715 12:57:23.130321 162 log.go:172] (0xc000a880b0) (0xc000550460) Stream removed, broadcasting: 1\nI0715 12:57:23.130361 162 log.go:172] (0xc000a880b0) Go away received\nI0715 12:57:23.130797 162 log.go:172] (0xc000a880b0) (0xc000550460) Stream removed, broadcasting: 1\nI0715 12:57:23.130835 162 log.go:172] (0xc000a880b0) (0xc0002f0000) Stream removed, broadcasting: 3\nI0715 12:57:23.130854 162 log.go:172] (0xc000a880b0) (0xc00031e000) Stream removed, broadcasting: 5\n"
Jul 15 12:57:23.136: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 15 12:57:23.136: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jul 15 12:57:23.140: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul 15 12:57:33.145: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 15 12:57:33.145: INFO: Waiting for statefulset status.replicas updated to 0
Jul 15 12:57:33.180: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999559s
Jul 15 12:57:34.184: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.975653545s
Jul 15 12:57:35.189: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.970801415s
Jul 15 12:57:36.192: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.966065955s
Jul 15 12:57:37.252: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.962831141s
Jul 15 12:57:38.269: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.903014561s
Jul 15 12:57:39.291: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.886442529s
Jul 15 12:57:40.297: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.864280225s
Jul 15 12:57:41.301: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.858372531s
Jul 15 12:57:42.444: INFO: Verifying statefulset ss doesn't scale past 1 for another 853.931281ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2764
Jul 15 12:57:43.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2764 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 15 12:57:43.879: INFO: stderr: "I0715 12:57:43.757421 188 log.go:172] (0xc0007f8630) (0xc000692aa0) Create stream\nI0715 12:57:43.757483 188 log.go:172] (0xc0007f8630) (0xc000692aa0) Stream added, broadcasting: 1\nI0715 12:57:43.761335 188 log.go:172] (0xc0007f8630) Reply frame received for 1\nI0715 12:57:43.761398 188 log.go:172] (0xc0007f8630) (0xc000616000) Create stream\nI0715 12:57:43.761425 188 log.go:172] (0xc0007f8630) (0xc000616000) Stream added, broadcasting: 3\nI0715 12:57:43.762273 188 log.go:172] (0xc0007f8630) Reply frame received for 3\nI0715 12:57:43.762347 188 log.go:172] (0xc0007f8630) (0xc000692320) Create stream\nI0715 12:57:43.762377 188 log.go:172] (0xc0007f8630) (0xc000692320) Stream added, broadcasting: 5\nI0715 12:57:43.763234 188 log.go:172] (0xc0007f8630) Reply frame received for 5\nI0715 12:57:43.815867 188 log.go:172] (0xc0007f8630) Data frame received for 5\nI0715 12:57:43.815901 188 log.go:172] (0xc000692320) (5) Data frame handling\nI0715 12:57:43.815922 188 log.go:172] (0xc000692320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0715 12:57:43.872078 188 log.go:172] (0xc0007f8630) Data frame received for 3\nI0715 12:57:43.872111 188 log.go:172] (0xc000616000) (3) Data frame handling\nI0715 12:57:43.872131 188 log.go:172] (0xc000616000) (3) Data frame sent\nI0715 12:57:43.872142 188 log.go:172] (0xc0007f8630) Data frame received for 3\nI0715 12:57:43.872152 188 log.go:172] (0xc000616000) (3) Data frame handling\nI0715 12:57:43.872440 188 log.go:172] (0xc0007f8630) Data frame received for 5\nI0715 12:57:43.872471 188 log.go:172] (0xc000692320) (5) Data frame handling\nI0715 12:57:43.875270 188 log.go:172] (0xc0007f8630) Data frame received for 1\nI0715 12:57:43.875307 188 log.go:172] (0xc000692aa0) (1) Data frame handling\nI0715 12:57:43.875322 188 log.go:172] (0xc000692aa0) (1) Data frame sent\nI0715 12:57:43.875359 188 log.go:172] (0xc0007f8630) (0xc000692aa0) Stream removed, broadcasting: 1\nI0715 12:57:43.875391 188 log.go:172] (0xc0007f8630) Go away received\nI0715 12:57:43.875937 188 log.go:172] (0xc0007f8630) (0xc000692aa0) Stream removed, broadcasting: 1\nI0715 12:57:43.875966 188 log.go:172] (0xc0007f8630) (0xc000616000) Stream removed, broadcasting: 3\nI0715 12:57:43.875979 188 log.go:172] (0xc0007f8630) (0xc000692320) Stream removed, broadcasting: 5\n"
Jul 15 12:57:43.880: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 15 12:57:43.880: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jul 15 12:57:43.922: INFO: Found 1 stateful pods, waiting for 3
Jul 15 12:57:53.927: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 15 12:57:53.927: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 15 12:57:53.927: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jul 15 12:57:53.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2764 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 15 12:57:54.138: INFO: stderr: "I0715 12:57:54.067394 209 log.go:172] (0xc000912580) (0xc0008e88c0) Create stream\nI0715 12:57:54.067481 209 log.go:172] (0xc000912580) (0xc0008e88c0) Stream added, broadcasting: 1\nI0715 12:57:54.070206 209 log.go:172] (0xc000912580) Reply frame received for 1\nI0715 12:57:54.070265 209 log.go:172] (0xc000912580) (0xc0008ae000) Create stream\nI0715 12:57:54.070287 209 log.go:172] (0xc000912580) (0xc0008ae000) Stream added, broadcasting: 3\nI0715 12:57:54.071362 209 log.go:172] (0xc000912580) Reply frame received for 3\nI0715 12:57:54.071406 209 log.go:172] (0xc000912580) (0xc00074a000) Create stream\nI0715 12:57:54.071423 209 log.go:172] (0xc000912580) (0xc00074a000) Stream added, broadcasting: 5\nI0715 12:57:54.072325 209 log.go:172] (0xc000912580) Reply frame received for 5\nI0715 12:57:54.131892 209 log.go:172] (0xc000912580) Data frame received for 5\nI0715 12:57:54.131936 209 log.go:172] (0xc00074a000) (5) Data frame handling\nI0715 12:57:54.131955 209 log.go:172] (0xc00074a000) (5) Data frame sent\nI0715 12:57:54.131971 209 log.go:172] (0xc000912580) Data frame received for 5\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0715 12:57:54.132645 209 log.go:172] (0xc00074a000) (5) Data frame handling\nI0715 12:57:54.132686 209 log.go:172] (0xc000912580) Data frame received for 3\nI0715 12:57:54.132701 209 log.go:172] (0xc0008ae000) (3) Data frame handling\nI0715 12:57:54.132828 209 log.go:172] (0xc0008ae000) (3) Data frame sent\nI0715 12:57:54.132851 209 log.go:172] (0xc000912580) Data frame received for 3\nI0715 12:57:54.132865 209 log.go:172] (0xc0008ae000) (3) Data frame handling\nI0715 12:57:54.133624 209 log.go:172] (0xc000912580) Data frame received for 1\nI0715 12:57:54.133671 209 log.go:172] (0xc0008e88c0) (1) Data frame handling\nI0715 12:57:54.133702 209 log.go:172] (0xc0008e88c0) (1) Data frame sent\nI0715 12:57:54.133725 209 log.go:172] (0xc000912580) (0xc0008e88c0) Stream removed, broadcasting: 1\nI0715 12:57:54.133754 209 log.go:172] (0xc000912580) Go away received\nI0715 12:57:54.134119 209 log.go:172] (0xc000912580) (0xc0008e88c0) Stream removed, broadcasting: 1\nI0715 12:57:54.134136 209 log.go:172] (0xc000912580) (0xc0008ae000) Stream removed, broadcasting: 3\nI0715 12:57:54.134144 209 log.go:172] (0xc000912580) (0xc00074a000) Stream removed, broadcasting: 5\n"
Jul 15 12:57:54.138: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 15 12:57:54.138: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jul 15 12:57:54.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2764 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 15 12:57:54.382: INFO: stderr: "I0715 12:57:54.259971 229 log.go:172] (0xc00094a580) (0xc00033a820) Create stream\nI0715 12:57:54.260028 229 log.go:172] (0xc00094a580) (0xc00033a820) Stream added, broadcasting: 1\nI0715 12:57:54.262554 229 log.go:172] (0xc00094a580) Reply frame received for 1\nI0715 12:57:54.262617 229 log.go:172] (0xc00094a580) (0xc0009b8000) Create stream\nI0715 12:57:54.262647 229 log.go:172] (0xc00094a580) (0xc0009b8000) Stream added, broadcasting: 3\nI0715 12:57:54.263459 229 log.go:172] (0xc00094a580) Reply frame received for 3\nI0715 12:57:54.263508 229 log.go:172] (0xc00094a580) (0xc00033a8c0) Create stream\nI0715 12:57:54.263541 229 log.go:172] (0xc00094a580) (0xc00033a8c0) Stream added, broadcasting: 5\nI0715 12:57:54.264443 229 log.go:172] (0xc00094a580) Reply frame received for 5\nI0715 12:57:54.347856 229 log.go:172] (0xc00094a580) Data frame received for 5\nI0715 12:57:54.347900 229 log.go:172] (0xc00033a8c0) (5) Data frame handling\nI0715 12:57:54.347927 229 log.go:172] (0xc00033a8c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0715 12:57:54.370802 229 log.go:172] (0xc00094a580) Data frame received for 3\nI0715 12:57:54.370826 229 log.go:172] (0xc0009b8000) (3) Data frame handling\nI0715 12:57:54.370837 229 log.go:172] (0xc0009b8000) (3) Data frame sent\nI0715 12:57:54.372129 229 log.go:172] (0xc00094a580) Data frame received for 3\nI0715 12:57:54.372159 229 log.go:172] (0xc0009b8000) (3) Data frame handling\nI0715 12:57:54.372565 229 log.go:172] (0xc00094a580) Data frame received for 5\nI0715 12:57:54.372597 229 log.go:172] (0xc00033a8c0) (5) Data frame handling\nI0715 12:57:54.375198 229 log.go:172] (0xc00094a580) Data frame received for 1\nI0715 12:57:54.375231 229 log.go:172] (0xc00033a820) (1) Data frame handling\nI0715 12:57:54.375250 229 log.go:172] (0xc00033a820) (1) Data frame sent\nI0715 12:57:54.375268 229 log.go:172] (0xc00094a580) (0xc00033a820) Stream removed, broadcasting: 1\nI0715 12:57:54.375297 229 log.go:172] (0xc00094a580) Go away received\nI0715 12:57:54.375689 229 log.go:172] (0xc00094a580) (0xc00033a820) Stream removed, broadcasting: 1\nI0715 12:57:54.375715 229 log.go:172] (0xc00094a580) (0xc0009b8000) Stream removed, broadcasting: 3\nI0715 12:57:54.375726 229 log.go:172] (0xc00094a580) (0xc00033a8c0) Stream removed, broadcasting: 5\n"
Jul 15 12:57:54.382: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 15 12:57:54.382: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jul 15 12:57:54.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2764 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 15 12:57:54.697: INFO: stderr: "I0715 12:57:54.506477 249 log.go:172] (0xc00012ce70) (0xc000696c80) Create stream\nI0715 12:57:54.506530 249 log.go:172] (0xc00012ce70) (0xc000696c80) Stream added, broadcasting: 1\nI0715 12:57:54.508992 249 log.go:172] (0xc00012ce70) Reply frame received for 1\nI0715 12:57:54.509022 249 log.go:172] (0xc00012ce70) (0xc000696d20) Create stream\nI0715 12:57:54.509030 249 log.go:172] (0xc00012ce70) (0xc000696d20) Stream added, broadcasting: 3\nI0715 12:57:54.509939 249 log.go:172] (0xc00012ce70) Reply frame received for 3\nI0715 12:57:54.509976 249 log.go:172] (0xc00012ce70) (0xc00093c000) Create stream\nI0715 12:57:54.510015 249 log.go:172] (0xc00012ce70) (0xc00093c000) Stream added, broadcasting: 5\nI0715 12:57:54.510925 249 log.go:172] (0xc00012ce70) Reply frame received for 5\nI0715 12:57:54.575701 249 log.go:172] (0xc00012ce70) Data frame received for 5\nI0715 12:57:54.575719 249 log.go:172] (0xc00093c000) (5) Data frame handling\nI0715 12:57:54.575732 249 log.go:172] (0xc00093c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0715 12:57:54.689039 249 log.go:172] (0xc00012ce70) Data frame received for 3\nI0715 12:57:54.689065 249 log.go:172] (0xc000696d20) (3) Data frame handling\nI0715 12:57:54.689077 249 log.go:172] (0xc000696d20) (3) Data frame sent\nI0715 12:57:54.689082 249 log.go:172] (0xc00012ce70) Data frame received for 3\nI0715 12:57:54.689087 249 log.go:172] (0xc000696d20) (3) Data frame handling\nI0715 12:57:54.689144 249 log.go:172] (0xc00012ce70) Data frame received for 5\nI0715 12:57:54.689156 249 log.go:172] (0xc00093c000) (5) Data frame handling\nI0715 12:57:54.692195 249 log.go:172] (0xc00012ce70) Data frame received for 1\nI0715 12:57:54.692217 249 log.go:172] (0xc000696c80) (1) Data frame handling\nI0715 12:57:54.692231 249 log.go:172] (0xc000696c80) (1) Data frame sent\nI0715 12:57:54.692394 249 log.go:172] (0xc00012ce70) (0xc000696c80) Stream removed, broadcasting: 1\nI0715 12:57:54.692500 249 log.go:172] (0xc00012ce70) Go away received\nI0715 12:57:54.692792 249 log.go:172] (0xc00012ce70) (0xc000696c80) Stream removed, broadcasting: 1\nI0715 12:57:54.692827 249 log.go:172] (0xc00012ce70) (0xc000696d20) Stream removed, broadcasting: 3\nI0715 12:57:54.692834 249 log.go:172] (0xc00012ce70) (0xc00093c000) Stream removed, broadcasting: 5\n"
Jul 15 12:57:54.697: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 15 12:57:54.697: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jul 15 12:57:54.697: INFO: Waiting for statefulset status.replicas updated to 0
Jul 15 12:57:54.700: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jul 15 12:58:04.706: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 15 12:58:04.706: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul 15 12:58:04.706: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul 15 12:58:04.821: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999722s
Jul 15 12:58:05.826: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.889922018s
Jul 15 12:58:06.831: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.885021993s
Jul 15 12:58:07.836: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.879747049s
Jul 15 12:58:08.841: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.875571783s
Jul 15 12:58:09.863: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.870600298s
Jul 15 12:58:10.868: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.848358143s
Jul 15 12:58:11.873: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.843233895s
Jul 15 12:58:12.878: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.838099898s
Jul 15 12:58:13.883: INFO: Verifying statefulset ss doesn't scale past 3 for another 832.794995ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-2764
Jul 15 12:58:14.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2764 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 15 12:58:15.134: INFO: stderr: "I0715 12:58:15.026662 269 log.go:172] (0xc000aba210) (0xc0003aa140) Create stream\nI0715 12:58:15.026738 269 log.go:172] (0xc000aba210) (0xc0003aa140) Stream added, broadcasting: 1\nI0715 12:58:15.029025 269 log.go:172] (0xc000aba210) Reply frame received for 1\nI0715 12:58:15.029084 269 log.go:172] (0xc000aba210) (0xc000728140) Create stream\nI0715 12:58:15.029107 269 log.go:172] (0xc000aba210) (0xc000728140) Stream added, broadcasting: 3\nI0715 12:58:15.031302 269 log.go:172] (0xc000aba210) Reply frame received for 3\nI0715 12:58:15.031338 269 log.go:172] (0xc000aba210) (0xc0003aa1e0) Create stream\nI0715 12:58:15.031347 269 log.go:172] (0xc000aba210) (0xc0003aa1e0) Stream added, broadcasting: 5\nI0715 12:58:15.032401 269 log.go:172] (0xc000aba210) Reply frame received for 5\nI0715 12:58:15.109134 269 log.go:172] (0xc000aba210) Data frame received for 3\nI0715 12:58:15.109161 269 log.go:172] (0xc000728140) (3) Data frame handling\nI0715 12:58:15.109171 269 log.go:172] (0xc000728140) (3) Data frame sent\nI0715 12:58:15.109177 269 log.go:172] (0xc000aba210) Data frame received for 3\nI0715 12:58:15.109184 269 log.go:172] (0xc000728140) (3) Data frame handling\nI0715 12:58:15.109208 269 log.go:172] (0xc000aba210) Data frame received for 5\nI0715 12:58:15.109215 269 log.go:172] (0xc0003aa1e0) (5) Data frame handling\nI0715 12:58:15.109222 269 log.go:172] (0xc0003aa1e0) (5) Data frame sent\nI0715 12:58:15.109228 269 log.go:172] (0xc000aba210) Data frame received for 5\nI0715 12:58:15.109235 269 log.go:172] (0xc0003aa1e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0715 12:58:15.129403 269 log.go:172] (0xc000aba210) Data frame received for 1\nI0715 12:58:15.129438 269 log.go:172] (0xc0003aa140) (1) Data frame handling\nI0715 12:58:15.129455 269 log.go:172] (0xc0003aa140) (1) Data frame sent\nI0715 12:58:15.129469 269 log.go:172] (0xc000aba210) (0xc0003aa140) Stream removed, broadcasting: 1\nI0715 12:58:15.129481 269 log.go:172] (0xc000aba210) Go away received\nI0715 12:58:15.129838 269 log.go:172] (0xc000aba210) (0xc0003aa140) Stream removed, broadcasting: 1\nI0715 12:58:15.129858 269 log.go:172] (0xc000aba210) (0xc000728140) Stream removed, broadcasting: 3\nI0715 12:58:15.129867 269 log.go:172] (0xc000aba210) (0xc0003aa1e0) Stream removed, broadcasting: 5\n"
Jul 15 12:58:15.134: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 15 12:58:15.134: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jul 15 12:58:15.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2764 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 15 12:58:15.423: INFO: stderr: "I0715 12:58:15.349105 291 log.go:172] (0xc0006a0630) (0xc0005e0be0) Create stream\nI0715 12:58:15.349165 291 log.go:172] (0xc0006a0630) (0xc0005e0be0) Stream added, broadcasting: 1\nI0715 12:58:15.352196 291 log.go:172] (0xc0006a0630) Reply frame received for 1\nI0715 12:58:15.352250 291 log.go:172] (0xc0006a0630) (0xc0005e0320) Create stream\nI0715 12:58:15.352265 291 log.go:172] (0xc0006a0630) (0xc0005e0320) Stream added, broadcasting: 3\nI0715 12:58:15.353200 291 log.go:172] (0xc0006a0630) Reply frame received for 3\nI0715 12:58:15.353233 291 log.go:172] (0xc0006a0630) (0xc000014000) Create stream\nI0715 12:58:15.353242 291 log.go:172] (0xc0006a0630) (0xc000014000) Stream added, broadcasting: 5\nI0715 12:58:15.354053 291 log.go:172] (0xc0006a0630) Reply frame received for 5\nI0715 12:58:15.418689 291 log.go:172] (0xc0006a0630) Data frame received for 3\nI0715 12:58:15.418733 291 log.go:172] (0xc0005e0320) (3) Data frame handling\nI0715 12:58:15.418747 291 log.go:172] (0xc0005e0320) (3) Data frame sent\nI0715 12:58:15.418757 291 log.go:172] (0xc0006a0630) Data frame received for 3\nI0715 12:58:15.418764 291 log.go:172] (0xc0005e0320) (3) Data frame handling\nI0715 12:58:15.418800 291 log.go:172] (0xc0006a0630) Data frame received for 5\nI0715 12:58:15.418812 291 log.go:172] (0xc000014000) (5) Data frame handling\nI0715 12:58:15.418827 291 log.go:172] (0xc000014000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0715 12:58:15.418840 291 log.go:172] (0xc0006a0630) Data frame received for 5\nI0715 12:58:15.418893 291 log.go:172] (0xc000014000) (5) Data frame handling\nI0715 12:58:15.419787 291 log.go:172] (0xc0006a0630) Data frame received for 1\nI0715 12:58:15.419804 291 log.go:172] (0xc0005e0be0) (1) Data frame handling\nI0715 12:58:15.419815 291 log.go:172] (0xc0005e0be0) (1) Data frame sent\nI0715 12:58:15.419837 291 log.go:172] (0xc0006a0630) (0xc0005e0be0) Stream removed, broadcasting: 1\nI0715 12:58:15.419864 291 log.go:172] (0xc0006a0630) Go away received\nI0715 12:58:15.420091 291 log.go:172] (0xc0006a0630) (0xc0005e0be0) Stream removed, broadcasting: 1\nI0715 12:58:15.420103 291 log.go:172] (0xc0006a0630) (0xc0005e0320) Stream removed, broadcasting: 3\nI0715 12:58:15.420109 291 log.go:172] (0xc0006a0630) (0xc000014000) Stream removed, broadcasting: 5\n"
Jul 15 12:58:15.424: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 15 12:58:15.424: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jul 15 12:58:15.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2764 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 15 12:58:15.631: INFO: stderr: "I0715 12:58:15.561296 311 log.go:172] (0xc000954370) (0xc000304820) Create stream\nI0715 12:58:15.561358 311 log.go:172] (0xc000954370) (0xc000304820) Stream added, broadcasting: 1\nI0715 12:58:15.563394 311 log.go:172] (0xc000954370) Reply frame received for 1\nI0715 12:58:15.563436 311 log.go:172] (0xc000954370) (0xc0005343c0) Create stream\nI0715 12:58:15.563448 311 log.go:172] (0xc000954370) (0xc0005343c0) Stream added, broadcasting: 3\nI0715 12:58:15.564484 311 log.go:172] (0xc000954370) Reply frame received for 3\nI0715 12:58:15.564525 311 log.go:172] (0xc000954370) (0xc0003048c0) Create stream\nI0715 12:58:15.564544 311 log.go:172] (0xc000954370) (0xc0003048c0) Stream added, broadcasting: 5\nI0715 12:58:15.565524 311 log.go:172] (0xc000954370) Reply frame received for 5\nI0715 12:58:15.624370 311 log.go:172] (0xc000954370) Data frame received for 3\nI0715 12:58:15.624400 311 log.go:172] (0xc0005343c0) (3) Data frame handling\nI0715 12:58:15.624421 311 log.go:172] (0xc0005343c0) (3) Data frame sent\nI0715 12:58:15.624601 311 log.go:172] (0xc000954370) Data frame received for 5\nI0715 12:58:15.624621 311 log.go:172] (0xc0003048c0) (5) Data frame handling\nI0715 12:58:15.624640 311 log.go:172] (0xc0003048c0) (5) Data frame sent\nI0715 12:58:15.624649 311 log.go:172] (0xc000954370) Data frame received for 5\nI0715 12:58:15.624660 311 log.go:172] (0xc0003048c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0715 12:58:15.625084 311 log.go:172] (0xc000954370) Data frame received for 3\nI0715 12:58:15.625123 311 log.go:172] (0xc0005343c0) (3) Data frame handling\nI0715 12:58:15.627696 311 log.go:172] (0xc000954370) Data frame received for 1\nI0715 12:58:15.627712 311 log.go:172] (0xc000304820) (1) Data frame handling\nI0715 12:58:15.627728 311 log.go:172] (0xc000304820) (1) Data frame sent\nI0715 12:58:15.627737 311 log.go:172] (0xc000954370) (0xc000304820) Stream removed, broadcasting: 1\nI0715 12:58:15.627950 311 log.go:172] (0xc000954370) Go away received\nI0715 12:58:15.627992 311 log.go:172] (0xc000954370) (0xc000304820) Stream removed, broadcasting: 1\nI0715 12:58:15.628024 311 log.go:172] (0xc000954370) (0xc0005343c0) Stream removed, broadcasting: 3\nI0715 12:58:15.628037 311 log.go:172] (0xc000954370) (0xc0003048c0) Stream removed, broadcasting: 5\n"
Jul 15 12:58:15.631: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 15 12:58:15.631: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jul 15 12:58:15.631: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jul 15 12:58:45.646: INFO: Deleting all statefulset in ns statefulset-2764
Jul 15 12:58:45.649: INFO: Scaling statefulset ss to 0
Jul 15 12:58:45.659: INFO: Waiting for statefulset status.replicas updated to 0
Jul 15 12:58:45.661: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 12:58:45.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2764" for this suite.
Jul 15 12:58:51.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 12:58:51.763: INFO: namespace statefulset-2764 deletion completed in 6.083786639s

• [SLOW TEST:99.062 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
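What the exec storm above is doing: the suite breaks a pod's readiness by moving index.html out of the nginx web root (the set's readiness check evidently depends on it), confirms that scale-up stalls at the unready ordinal for the full verification window, restores the file, then repeats the trick across all three pods to show scale-down also halts and later proceeds in reverse ordinal order. The ordering comes from the default pod management policy; a sketch of observing it by hand (statefulset name reused from the log, namespace illustrative):

    # With podManagementPolicy: OrderedReady (the default), ss-1 is only created
    # once ss-0 is Running and Ready; scale-down removes ss-2, then ss-1, then ss-0.
    kubectl scale statefulset ss --replicas=3 --namespace=statefulset-demo
    kubectl get pods -w -l baz=blah,foo=bar --namespace=statefulset-demo
    kubectl scale statefulset ss --replicas=0 --namespace=statefulset-demo
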
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 12:58:51.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-b7195860-c8f8-4e3a-804a-11299b9faab6
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 12:58:51.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4045" for this suite.
Jul 15 12:58:57.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 12:58:57.920: INFO: namespace secrets-4045 deletion completed in 6.095319721s

• [SLOW TEST:6.157 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
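This spec never runs a pod: it only asserts that API-server validation rejects a Secret whose data map contains an empty key, so the whole test is create-and-expect-error. Reproducing the rejection by hand (expected to fail; the exact error wording varies by server version, and the secret name is illustrative):

    # "" is a syntactically valid YAML key, but Secret data keys must match
    # [-._a-zA-Z0-9]+, so the API server refuses the object.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-emptykey-demo
    data:
      "": dGVzdA==
    EOF
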
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 12:58:57.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6c52df3b-626b-40a6-a59c-072db1933f26
STEP: Creating a pod to test consume secrets
Jul 15 12:58:58.005: INFO: Waiting up to 5m0s for pod "pod-secrets-be919475-c107-4201-b54d-7dc0e54ede6d" in namespace "secrets-4025" to be "success or failure"
Jul 15 12:58:58.022: INFO: Pod "pod-secrets-be919475-c107-4201-b54d-7dc0e54ede6d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.989419ms
Jul 15 12:59:00.027: INFO: Pod "pod-secrets-be919475-c107-4201-b54d-7dc0e54ede6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021143122s
Jul 15 12:59:02.031: INFO: Pod "pod-secrets-be919475-c107-4201-b54d-7dc0e54ede6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025274182s
STEP: Saw pod success
Jul 15 12:59:02.031: INFO: Pod "pod-secrets-be919475-c107-4201-b54d-7dc0e54ede6d" satisfied condition "success or failure"
Jul 15 12:59:02.034: INFO: Trying to get logs from node iruya-worker pod pod-secrets-be919475-c107-4201-b54d-7dc0e54ede6d container secret-volume-test: 
STEP: delete the pod
Jul 15 12:59:02.077: INFO: Waiting for pod pod-secrets-be919475-c107-4201-b54d-7dc0e54ede6d to disappear
Jul 15 12:59:02.097: INFO: Pod pod-secrets-be919475-c107-4201-b54d-7dc0e54ede6d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 12:59:02.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4025" for this suite.
Jul 15 12:59:08.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 12:59:08.204: INFO: namespace secrets-4025 deletion completed in 6.103268181s

• [SLOW TEST:10.283 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
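The defaultMode test drives a short-lived pod that mounts the secret and prints the projected file's mode and contents for the framework to assert on. Roughly (secret name, paths, and image are illustrative, not the suite's exact values):

    # defaultMode applies to every file projected from the secret; 0400 here is
    # YAML octal, i.e. decimal 256 in a JSON manifest.
    kubectl create secret generic secret-test --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-test
          defaultMode: 0400
    EOF
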
\"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 12:59:21.519: INFO: all replica sets need to contain the pod-template-hash label Jul 15 12:59:21.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414761, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 12:59:23.518: INFO: all replica sets need to contain the pod-template-hash label Jul 15 12:59:23.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414761, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 12:59:25.518: INFO: all replica sets need to contain the pod-template-hash label Jul 15 12:59:25.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414761, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 12:59:27.526: INFO: all replica sets need to contain the pod-template-hash label Jul 15 12:59:27.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414761, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 12:59:29.519: INFO: all replica sets need to contain the pod-template-hash label Jul 15 12:59:29.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414761, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 12:59:31.555: INFO: Jul 15 12:59:31.555: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414771, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730414755, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 12:59:33.519: INFO: Jul 15 12:59:33.519: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jul 15 12:59:33.527: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-76,SelfLink:/apis/apps/v1/namespaces/deployment-76/deployments/test-rollover-deployment,UID:73d06c69-5509-4e67-93f0-814b3280b8b6,ResourceVersion:1016881,Generation:2,CreationTimestamp:2020-07-15 12:59:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-15 12:59:15 +0000 UTC 2020-07-15 12:59:15 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-15 12:59:31 +0000 UTC 2020-07-15 12:59:15 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jul 15 12:59:33.530: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-76,SelfLink:/apis/apps/v1/namespaces/deployment-76/replicasets/test-rollover-deployment-854595fc44,UID:0a369679-6395-4af7-be2a-024a266b9141,ResourceVersion:1016869,Generation:2,CreationTimestamp:2020-07-15 12:59:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 73d06c69-5509-4e67-93f0-814b3280b8b6 0xc002afbc17 
0xc002afbc18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jul 15 12:59:33.530: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jul 15 12:59:33.530: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-76,SelfLink:/apis/apps/v1/namespaces/deployment-76/replicasets/test-rollover-controller,UID:742b3c7b-91ee-4f0f-9019-86c4953f341d,ResourceVersion:1016879,Generation:2,CreationTimestamp:2020-07-15 12:59:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 73d06c69-5509-4e67-93f0-814b3280b8b6 0xc002afbb47 0xc002afbb48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: 
nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 15 12:59:33.530: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-76,SelfLink:/apis/apps/v1/namespaces/deployment-76/replicasets/test-rollover-deployment-9b8b997cf,UID:8565fe3e-77c8-4058-839b-62b5c038c080,ResourceVersion:1016789,Generation:2,CreationTimestamp:2020-07-15 12:59:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 73d06c69-5509-4e67-93f0-814b3280b8b6 0xc002afbce0 0xc002afbce1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 15 12:59:33.533: INFO: Pod "test-rollover-deployment-854595fc44-w6lp2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-w6lp2,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-76,SelfLink:/api/v1/namespaces/deployment-76/pods/test-rollover-deployment-854595fc44-w6lp2,UID:59131e24-5d6d-45a9-8ac7-dbcfb4308e83,ResourceVersion:1016815,Generation:0,CreationTimestamp:2020-07-15 12:59:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 0a369679-6395-4af7-be2a-024a266b9141 0xc002ce08f7 0xc002ce08f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kbdc8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kbdc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-kbdc8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ce0970} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ce0990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 12:59:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 12:59:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 
+0000 UTC 2020-07-15 12:59:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 12:59:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.104,StartTime:2020-07-15 12:59:17 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-15 12:59:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://2ab6e3797c156351c2574e881165b80fa839b270ad11cb41fbabb7aa08284b3a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 12:59:33.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-76" for this suite. Jul 15 12:59:41.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 12:59:41.673: INFO: namespace deployment-76 deletion completed in 8.135981669s • [SLOW TEST:33.469 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 12:59:41.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 15 12:59:41.750: INFO: Waiting up to 5m0s for pod "pod-397eac97-97c5-456e-9128-86dad758de4b" in namespace "emptydir-3335" to be "success or failure" Jul 15 12:59:41.753: INFO: Pod "pod-397eac97-97c5-456e-9128-86dad758de4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.987506ms Jul 15 12:59:43.756: INFO: Pod "pod-397eac97-97c5-456e-9128-86dad758de4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006646961s Jul 15 12:59:45.761: INFO: Pod "pod-397eac97-97c5-456e-9128-86dad758de4b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011199576s STEP: Saw pod success Jul 15 12:59:45.761: INFO: Pod "pod-397eac97-97c5-456e-9128-86dad758de4b" satisfied condition "success or failure" Jul 15 12:59:45.764: INFO: Trying to get logs from node iruya-worker pod pod-397eac97-97c5-456e-9128-86dad758de4b container test-container: STEP: delete the pod Jul 15 12:59:45.788: INFO: Waiting for pod pod-397eac97-97c5-456e-9128-86dad758de4b to disappear Jul 15 12:59:45.793: INFO: Pod pod-397eac97-97c5-456e-9128-86dad758de4b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 12:59:45.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3335" for this suite. Jul 15 12:59:51.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 12:59:51.979: INFO: namespace emptydir-3335 deletion completed in 6.183618167s • [SLOW TEST:10.306 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 12:59:51.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jul 15 12:59:52.049: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 15 12:59:52.091: INFO: Waiting for terminating namespaces to be deleted... 
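For readers tracing the rollover Deployment test above: the poll loop keeps logging "ReplicaSetUpdated ... is progressing" because MinReadySeconds=10 delays availability of each new pod while MaxUnavailable=0 / MaxSurge=1 keeps an old pod serving. A minimal Go sketch of that Deployment shape, reconstructed from the spec dumped above (the package name, helper name, and omitted error handling are illustrative, not taken from the test source):

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rolloverDeployment mirrors the Deployment dumped above: one replica,
// MaxUnavailable=0 / MaxSurge=1, and MinReadySeconds=10 so a new pod must
// stay Ready for 10s before it counts as available.
func rolloverDeployment(namespace string) *appsv1.Deployment {
	replicas := int32(1)
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)
	labels := map[string]string{"name": "rollover-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment", Namespace: namespace},
		Spec: appsv1.DeploymentSpec{
			Replicas:        &replicas,
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			MinReadySeconds: 10,
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
}

In the run above, revision 1 used the redis-slave image (a nonexistent tag) and never became ready, so the controller rolled over to the redis template shown here: the new ReplicaSet (hash 854595fc44) scaled up while both 9b8b997cf and the bare test-rollover-controller went to zero replicas.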
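The emptydir (root,0644,default) case that follows it is simpler: a pod mounts an emptyDir on the node's default medium, writes a file as root with mode 0644, and the test asserts on the mode reported from inside the container. A sketch of that wiring, substituting a plain busybox shell for the dedicated mounttest image the suite uses (image, file name, and command are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirModePod mounts an emptyDir on the default medium and prints the
// mode of a file created as root with chmod 0644; the test then asserts
// on that output.
func emptyDirModePod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "" (StorageMediumDefault) = whatever backs the node's
					// filesystem, as opposed to StorageMediumMemory (tmpfs).
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /test-volume/test-file && chmod 0644 /test-volume/test-file && stat -c %a /test-volume/test-file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}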
Jul 15 12:59:52.094: INFO: Logging pods the kubelet thinks are on node iruya-worker before test Jul 15 12:59:52.100: INFO: kube-proxy-2pg5m from kube-system started at 2020-07-10 10:24:49 +0000 UTC (1 container status recorded) Jul 15 12:59:52.100: INFO: Container kube-proxy ready: true, restart count 0 Jul 15 12:59:52.100: INFO: dnsutils from default started at 2020-07-10 11:15:11 +0000 UTC (1 container status recorded) Jul 15 12:59:52.100: INFO: Container dnsutils ready: true, restart count 121 Jul 15 12:59:52.100: INFO: live-test7-5dd99f9b45-jtpmp from default started at 2020-07-10 11:54:47 +0000 UTC (1 container status recorded) Jul 15 12:59:52.100: INFO: Container live-test7 ready: false, restart count 1396 Jul 15 12:59:52.100: INFO: kindnet-452tn from kube-system started at 2020-07-10 10:24:50 +0000 UTC (1 container status recorded) Jul 15 12:59:52.100: INFO: Container kindnet-cni ready: true, restart count 0 Jul 15 12:59:52.100: INFO: live-test4-74f5c7c95f-l2676 from default started at 2020-07-10 11:02:03 +0000 UTC (1 container status recorded) Jul 15 12:59:52.100: INFO: Container live-test4 ready: false, restart count 1412 Jul 15 12:59:52.100: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test Jul 15 12:59:52.109: INFO: live-test5-b6fcb7757-w869x from default started at 2020-07-10 11:06:28 +0000 UTC (1 container status recorded) Jul 15 12:59:52.109: INFO: Container live-test5 ready: false, restart count 1409 Jul 15 12:59:52.109: INFO: live-test2-54d9dcd87-bsdvc from default started at 2020-07-10 10:58:02 +0000 UTC (1 container status recorded) Jul 15 12:59:52.109: INFO: Container live-test2 ready: false, restart count 1413 Jul 15 12:59:52.109: INFO: live-test8-55669b464c-bfdv5 from default started at 2020-07-10 11:56:07 +0000 UTC (1 container status recorded) Jul 15 12:59:52.109: INFO: Container live-test8 ready: false, restart count 1399 Jul 15 12:59:52.109: INFO: kube-proxy-bf52l from kube-system started at 2020-07-10 10:24:49 +0000 UTC (1 container status recorded) Jul 15 12:59:52.109: INFO: Container kube-proxy ready: true, restart count 0 Jul 15 12:59:52.109: INFO: kindnet-qpkmc from kube-system started at 2020-07-10 10:24:50 +0000 UTC (1 container status recorded) Jul 15 12:59:52.109: INFO: Container kindnet-cni ready: true, restart count 0 Jul 15 12:59:52.109: INFO: live-test3-6556bf7d77-2k9dg from default started at 2020-07-10 11:00:05 +0000 UTC (1 container status recorded) Jul 15 12:59:52.109: INFO: Container live-test3 ready: false, restart count 1409 Jul 15 12:59:52.109: INFO: live-test6-988dbb567-rqc7x from default started at 2020-07-10 11:22:41 +0000 UTC (1 container status recorded) Jul 15 12:59:52.109: INFO: Container live-test6 ready: false, restart count 1410 Jul 15 12:59:52.109: INFO: live-test1-677ffc8869-nvdk5 from default started at 2020-07-10 10:49:37 +0000 UTC (1 container status recorded) Jul 15 12:59:52.109: INFO: Container live-test1 ready: false, restart count 1412 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1621ee839e0f2110], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
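The FailedScheduling event just above is produced by a pod whose nodeSelector matches no node label, so all three nodes are rejected. A sketch of such a pod; the label pair and pause image are illustrative stand-ins for the generated test values:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unschedulablePod carries a nodeSelector no node satisfies, so the
// scheduler can only emit FailedScheduling events for it -- which is
// exactly what the test waits to observe.
func unschedulablePod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod", Namespace: namespace},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"label": "nonempty"}, // matches no node
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
}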
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 12:59:53.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8600" for this suite. Jul 15 12:59:59.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 12:59:59.244: INFO: namespace sched-pred-8600 deletion completed in 6.109420903s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.265 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 12:59:59.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2554 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 15 12:59:59.323: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 15 13:00:27.417: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.3:8080/dial?request=hostName&protocol=udp&host=10.244.2.254&port=8081&tries=1'] Namespace:pod-network-test-2554 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 13:00:27.417: INFO: >>> kubeConfig: /root/.kube/config I0715 13:00:27.449365 6 log.go:172] (0xc002c46b00) (0xc001c68320) Create stream I0715 13:00:27.449397 6 log.go:172] (0xc002c46b00) (0xc001c68320) Stream added, broadcasting: 1 I0715 13:00:27.453313 6 log.go:172] (0xc002c46b00) Reply frame received for 1 I0715 13:00:27.453375 6 log.go:172] (0xc002c46b00) (0xc0021f4e60) Create stream I0715 13:00:27.453394 6 log.go:172] (0xc002c46b00) (0xc0021f4e60) Stream added, broadcasting: 3 I0715 13:00:27.456219 6 log.go:172] (0xc002c46b00) Reply frame received for 3 I0715 13:00:27.456256 6 log.go:172] (0xc002c46b00) (0xc001c683c0) Create stream I0715 13:00:27.456274 6 log.go:172] (0xc002c46b00) (0xc001c683c0) Stream added, broadcasting: 5 I0715 13:00:27.458716 6 log.go:172] (0xc002c46b00) Reply frame received for 5 I0715 13:00:27.563772 6 log.go:172] (0xc002c46b00) Data frame received for 3 I0715 13:00:27.563808 6 log.go:172] (0xc0021f4e60) (3) Data frame handling I0715 13:00:27.563829 6 log.go:172] (0xc0021f4e60) 
(3) Data frame sent I0715 13:00:27.564665 6 log.go:172] (0xc002c46b00) Data frame received for 3 I0715 13:00:27.564692 6 log.go:172] (0xc0021f4e60) (3) Data frame handling I0715 13:00:27.564935 6 log.go:172] (0xc002c46b00) Data frame received for 5 I0715 13:00:27.564971 6 log.go:172] (0xc001c683c0) (5) Data frame handling I0715 13:00:27.567118 6 log.go:172] (0xc002c46b00) Data frame received for 1 I0715 13:00:27.567155 6 log.go:172] (0xc001c68320) (1) Data frame handling I0715 13:00:27.567185 6 log.go:172] (0xc001c68320) (1) Data frame sent I0715 13:00:27.567209 6 log.go:172] (0xc002c46b00) (0xc001c68320) Stream removed, broadcasting: 1 I0715 13:00:27.567232 6 log.go:172] (0xc002c46b00) Go away received I0715 13:00:27.567733 6 log.go:172] (0xc002c46b00) (0xc001c68320) Stream removed, broadcasting: 1 I0715 13:00:27.567761 6 log.go:172] (0xc002c46b00) (0xc0021f4e60) Stream removed, broadcasting: 3 I0715 13:00:27.567793 6 log.go:172] (0xc002c46b00) (0xc001c683c0) Stream removed, broadcasting: 5 Jul 15 13:00:27.567: INFO: Waiting for endpoints: map[] Jul 15 13:00:27.571: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.3:8080/dial?request=hostName&protocol=udp&host=10.244.1.106&port=8081&tries=1'] Namespace:pod-network-test-2554 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 13:00:27.571: INFO: >>> kubeConfig: /root/.kube/config I0715 13:00:27.608454 6 log.go:172] (0xc002c476b0) (0xc001c686e0) Create stream I0715 13:00:27.608477 6 log.go:172] (0xc002c476b0) (0xc001c686e0) Stream added, broadcasting: 1 I0715 13:00:27.610774 6 log.go:172] (0xc002c476b0) Reply frame received for 1 I0715 13:00:27.610814 6 log.go:172] (0xc002c476b0) (0xc001346140) Create stream I0715 13:00:27.610827 6 log.go:172] (0xc002c476b0) (0xc001346140) Stream added, broadcasting: 3 I0715 13:00:27.611987 6 log.go:172] (0xc002c476b0) Reply frame received for 3 I0715 13:00:27.612040 6 log.go:172] (0xc002c476b0) (0xc001c68780) Create stream I0715 13:00:27.612056 6 log.go:172] (0xc002c476b0) (0xc001c68780) Stream added, broadcasting: 5 I0715 13:00:27.613697 6 log.go:172] (0xc002c476b0) Reply frame received for 5 I0715 13:00:27.687472 6 log.go:172] (0xc002c476b0) Data frame received for 3 I0715 13:00:27.687503 6 log.go:172] (0xc001346140) (3) Data frame handling I0715 13:00:27.687523 6 log.go:172] (0xc001346140) (3) Data frame sent I0715 13:00:27.688243 6 log.go:172] (0xc002c476b0) Data frame received for 5 I0715 13:00:27.688328 6 log.go:172] (0xc001c68780) (5) Data frame handling I0715 13:00:27.688355 6 log.go:172] (0xc002c476b0) Data frame received for 3 I0715 13:00:27.688364 6 log.go:172] (0xc001346140) (3) Data frame handling I0715 13:00:27.689803 6 log.go:172] (0xc002c476b0) Data frame received for 1 I0715 13:00:27.689830 6 log.go:172] (0xc001c686e0) (1) Data frame handling I0715 13:00:27.689851 6 log.go:172] (0xc001c686e0) (1) Data frame sent I0715 13:00:27.689875 6 log.go:172] (0xc002c476b0) (0xc001c686e0) Stream removed, broadcasting: 1 I0715 13:00:27.689894 6 log.go:172] (0xc002c476b0) Go away received I0715 13:00:27.690035 6 log.go:172] (0xc002c476b0) (0xc001c686e0) Stream removed, broadcasting: 1 I0715 13:00:27.690091 6 log.go:172] (0xc002c476b0) (0xc001346140) Stream removed, broadcasting: 3 I0715 13:00:27.690136 6 log.go:172] (0xc002c476b0) (0xc001c68780) Stream removed, broadcasting: 5 Jul 15 13:00:27.690: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:00:27.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2554" for this suite. Jul 15 13:00:51.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:00:51.773: INFO: namespace pod-network-test-2554 deletion completed in 24.078786503s • [SLOW TEST:52.527 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:00:51.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jul 15 13:00:58.201: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-42fb8faf-224d-42c4-aa34-e3ae584b19f4,GenerateName:,Namespace:events-1524,SelfLink:/api/v1/namespaces/events-1524/pods/send-events-42fb8faf-224d-42c4-aa34-e3ae584b19f4,UID:ed6f6d81-2124-4351-99a8-01d64cd3f311,ResourceVersion:1017356,Generation:0,CreationTimestamp:2020-07-15 13:00:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 99985976,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-r2p4q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r2p4q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-r2p4q true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029f5430} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029f5450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:00:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:00:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:00:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:00:52 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.107,StartTime:2020-07-15 13:00:52 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-07-15 13:00:56 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://89cc320819611629084968ec8ab53cbeaa44d44022e89c1387c73064469f2bd8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jul 15 13:01:00.205: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jul 15 13:01:02.225: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:01:02.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1524" for this suite. 
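Backing up to the intra-pod UDP check: the ExecWithOptions records above show the framework curling the netexec /dial endpoint on the host test container, which relays a UDP request to each target pod and returns the hostnames it heard. The same probe sketched as a plain Go HTTP call instead of an in-pod curl (the URL shape mirrors the log; direct reachability of the container IP is an assumption of the sketch):

package sketch

import (
	"fmt"
	"io"
	"net/http"
)

// dialUDP asks the netexec container at containerIP to send a UDP
// "hostName" request to targetIP:8081 and relay back what it heard.
func dialUDP(containerIP, targetIP string) (string, error) {
	url := fmt.Sprintf(
		"http://%s:8080/dial?request=hostName&protocol=udp&host=%s&port=8081&tries=1",
		containerIP, targetIP)
	resp, err := http.Get(url) // the framework runs curl inside a host pod instead
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}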
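And for the events test that just finished: "Saw scheduler event" and "Saw kubelet event" come down to listing events filtered on the pod and the reporting source. A client-go sketch of the scheduler-side check (context-first signatures from current client-go; the 1.15-era clientset omitted the context argument, and "cs" is an assumed kubernetes.Interface):

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// sawSchedulerEvent reports whether the default scheduler has recorded any
// event against the named pod; filtering on the node's kubelet as the
// source gives the kubelet-side check.
func sawSchedulerEvent(ctx context.Context, cs kubernetes.Interface, ns, pod string) (bool, error) {
	sel := fmt.Sprintf(
		"involvedObject.name=%s,involvedObject.namespace=%s,source=default-scheduler", pod, ns)
	evs, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{FieldSelector: sel})
	if err != nil {
		return false, err
	}
	return len(evs.Items) > 0, nil
}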
Jul 15 13:01:40.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:01:40.348: INFO: namespace events-1524 deletion completed in 38.10710708s • [SLOW TEST:48.575 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:01:40.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jul 15 13:01:40.501: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-9316,SelfLink:/api/v1/namespaces/watch-9316/configmaps/e2e-watch-test-resource-version,UID:54d6846b-aea7-4f0a-8c42-cad13427a71e,ResourceVersion:1017482,Generation:0,CreationTimestamp:2020-07-15 13:01:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 15 13:01:40.501: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-9316,SelfLink:/api/v1/namespaces/watch-9316/configmaps/e2e-watch-test-resource-version,UID:54d6846b-aea7-4f0a-8c42-cad13427a71e,ResourceVersion:1017483,Generation:0,CreationTimestamp:2020-07-15 13:01:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:01:40.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9316" for this suite. 
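The watch test above hinges on one API property: a watch opened with the resourceVersion returned by the first update replays every change recorded after that version, which is why the log shows both the second MODIFIED (mutation: 2) and the DELETED notification. A client-go sketch (context-first signatures assumed; the label selector is copied from the configmap dumps above):

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchFromVersion opens a configmap watch at a known resourceVersion and
// prints every notification the apiserver replays from that point on.
func watchFromVersion(ctx context.Context, cs kubernetes.Interface, ns, rv string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=from-resource-version",
		ResourceVersion: rv, // replay changes recorded after this version
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
	return nil
}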
Jul 15 13:01:46.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:01:46.643: INFO: namespace watch-9316 deletion completed in 6.124938444s • [SLOW TEST:6.295 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:01:46.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-dff4a6d4-90a4-4049-ab60-964d5bf05798 STEP: Creating a pod to test consume secrets Jul 15 13:01:46.775: INFO: Waiting up to 5m0s for pod "pod-secrets-218a1e64-504f-4b2a-8972-04ded5a15458" in namespace "secrets-3998" to be "success or failure" Jul 15 13:01:46.778: INFO: Pod "pod-secrets-218a1e64-504f-4b2a-8972-04ded5a15458": Phase="Pending", Reason="", readiness=false. Elapsed: 3.108291ms Jul 15 13:01:48.900: INFO: Pod "pod-secrets-218a1e64-504f-4b2a-8972-04ded5a15458": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125234547s Jul 15 13:01:50.903: INFO: Pod "pod-secrets-218a1e64-504f-4b2a-8972-04ded5a15458": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.128657354s STEP: Saw pod success Jul 15 13:01:50.903: INFO: Pod "pod-secrets-218a1e64-504f-4b2a-8972-04ded5a15458" satisfied condition "success or failure" Jul 15 13:01:50.906: INFO: Trying to get logs from node iruya-worker pod pod-secrets-218a1e64-504f-4b2a-8972-04ded5a15458 container secret-volume-test: STEP: delete the pod Jul 15 13:01:51.045: INFO: Waiting for pod pod-secrets-218a1e64-504f-4b2a-8972-04ded5a15458 to disappear Jul 15 13:01:51.055: INFO: Pod pod-secrets-218a1e64-504f-4b2a-8972-04ded5a15458 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:01:51.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3998" for this suite. 
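The secrets case above maps a secret key to a custom path with an explicit per-item mode, then verifies both from inside the pod. A sketch of that volume shape; the key/path names and the 0400 mode are illustrative, and busybox again stands in for the suite's mounttest image:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretVolumePod remaps one secret key to a custom file name with an
// explicit per-item mode, overriding the volume-wide default.
func secretVolumePod(namespace, secretName string) *corev1.Pod {
	mode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: secretName,
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1", // the "mapping" in the test name
							Mode: &mode,             // the "Item Mode" in the test name
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"cat /etc/secret-volume/new-path-data-1 && stat -c %a /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
}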
Jul 15 13:01:57.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:01:57.147: INFO: namespace secrets-3998 deletion completed in 6.088776311s • [SLOW TEST:10.504 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:01:57.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 15 13:01:57.283: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b0d34a3f-d38a-4724-a106-6dcbc53d358b" in namespace "projected-5050" to be "success or failure" Jul 15 13:01:57.336: INFO: Pod "downwardapi-volume-b0d34a3f-d38a-4724-a106-6dcbc53d358b": Phase="Pending", Reason="", readiness=false. Elapsed: 52.638794ms Jul 15 13:01:59.339: INFO: Pod "downwardapi-volume-b0d34a3f-d38a-4724-a106-6dcbc53d358b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056171512s Jul 15 13:02:01.344: INFO: Pod "downwardapi-volume-b0d34a3f-d38a-4724-a106-6dcbc53d358b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060437059s STEP: Saw pod success Jul 15 13:02:01.344: INFO: Pod "downwardapi-volume-b0d34a3f-d38a-4724-a106-6dcbc53d358b" satisfied condition "success or failure" Jul 15 13:02:01.346: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b0d34a3f-d38a-4724-a106-6dcbc53d358b container client-container: STEP: delete the pod Jul 15 13:02:01.367: INFO: Waiting for pod downwardapi-volume-b0d34a3f-d38a-4724-a106-6dcbc53d358b to disappear Jul 15 13:02:01.371: INFO: Pod downwardapi-volume-b0d34a3f-d38a-4724-a106-6dcbc53d358b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:02:01.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5050" for this suite. 
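The "podname only" projected downwardAPI case boils down to one volume: metadata.name projected into a single file that the client container reads back. A minimal sketch of just the volume definition (volume and file names are illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

// podnameVolume projects only metadata.name into a file; the test's
// client container cats the file and the framework compares it with the
// pod's actual name.
func podnameVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
						}},
					},
				}},
			},
		},
	}
}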
Jul 15 13:02:07.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:02:07.492: INFO: namespace projected-5050 deletion completed in 6.116030608s • [SLOW TEST:10.344 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:02:07.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0715 13:02:48.422844 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 15 13:02:48.422: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:02:48.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6894" for this suite. 
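The garbage-collector test above exercises one delete option: PropagationPolicy=Orphan, which makes the garbage collector strip the ownerReference from the RC's pods instead of cascading the delete, so the 30-second observation window sees them survive. A client-go sketch of that call (context-first signature from current client-go; "cs" is an assumed kubernetes.Interface):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCOrphaningPods deletes a replication controller while explicitly
// orphaning its pods: their ownerReference is removed and they keep running.
func deleteRCOrphaningPods(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &orphan,
	})
}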
Jul 15 13:02:58.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:02:58.514: INFO: namespace gc-6894 deletion completed in 10.087732423s • [SLOW TEST:51.022 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:02:58.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7418 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-7418 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7418 Jul 15 13:02:58.642: INFO: Found 0 stateful pods, waiting for 1 Jul 15 13:03:08.646: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jul 15 13:03:08.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7418 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 15 13:03:08.914: INFO: stderr: "I0715 13:03:08.774020 332 log.go:172] (0xc000130f20) (0xc000666aa0) Create stream\nI0715 13:03:08.774083 332 log.go:172] (0xc000130f20) (0xc000666aa0) Stream added, broadcasting: 1\nI0715 13:03:08.776257 332 log.go:172] (0xc000130f20) Reply frame received for 1\nI0715 13:03:08.776311 332 log.go:172] (0xc000130f20) (0xc00081a000) Create stream\nI0715 13:03:08.776327 332 log.go:172] (0xc000130f20) (0xc00081a000) Stream added, broadcasting: 3\nI0715 13:03:08.777251 332 log.go:172] (0xc000130f20) Reply frame received for 3\nI0715 13:03:08.777276 332 log.go:172] (0xc000130f20) (0xc000666b40) Create stream\nI0715 13:03:08.777283 332 log.go:172] (0xc000130f20) (0xc000666b40) Stream added, broadcasting: 5\nI0715 13:03:08.778011 332 log.go:172] (0xc000130f20) Reply frame received for 5\nI0715 13:03:08.862686 332 log.go:172] (0xc000130f20) Data frame received for 5\nI0715 13:03:08.862714 332 log.go:172] (0xc000666b40) (5) Data frame handling\nI0715 13:03:08.862728 332 log.go:172] (0xc000666b40) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0715 13:03:08.908439 332 log.go:172] (0xc000130f20) Data frame 
received for 3\nI0715 13:03:08.908473 332 log.go:172] (0xc00081a000) (3) Data frame handling\nI0715 13:03:08.908493 332 log.go:172] (0xc00081a000) (3) Data frame sent\nI0715 13:03:08.908595 332 log.go:172] (0xc000130f20) Data frame received for 3\nI0715 13:03:08.908612 332 log.go:172] (0xc00081a000) (3) Data frame handling\nI0715 13:03:08.908952 332 log.go:172] (0xc000130f20) Data frame received for 5\nI0715 13:03:08.908970 332 log.go:172] (0xc000666b40) (5) Data frame handling\nI0715 13:03:08.910313 332 log.go:172] (0xc000130f20) Data frame received for 1\nI0715 13:03:08.910328 332 log.go:172] (0xc000666aa0) (1) Data frame handling\nI0715 13:03:08.910335 332 log.go:172] (0xc000666aa0) (1) Data frame sent\nI0715 13:03:08.910350 332 log.go:172] (0xc000130f20) (0xc000666aa0) Stream removed, broadcasting: 1\nI0715 13:03:08.910619 332 log.go:172] (0xc000130f20) (0xc000666aa0) Stream removed, broadcasting: 1\nI0715 13:03:08.910631 332 log.go:172] (0xc000130f20) (0xc00081a000) Stream removed, broadcasting: 3\nI0715 13:03:08.910636 332 log.go:172] (0xc000130f20) (0xc000666b40) Stream removed, broadcasting: 5\n" Jul 15 13:03:08.914: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 15 13:03:08.914: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 15 13:03:08.917: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 15 13:03:18.922: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 15 13:03:18.922: INFO: Waiting for statefulset status.replicas updated to 0 Jul 15 13:03:18.944: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 13:03:18.944: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:02:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:02:58 +0000 UTC }] Jul 15 13:03:18.944: INFO: Jul 15 13:03:18.944: INFO: StatefulSet ss has not reached scale 3, at 1 Jul 15 13:03:19.949: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988212256s Jul 15 13:03:21.183: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.983385249s Jul 15 13:03:22.231: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.748848811s Jul 15 13:03:23.260: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.70093104s Jul 15 13:03:24.265: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.672103102s Jul 15 13:03:25.270: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.666618958s Jul 15 13:03:26.275: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.661505751s Jul 15 13:03:27.281: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.656558616s Jul 15 13:03:28.285: INFO: Verifying statefulset ss doesn't scale past 3 for another 651.373613ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7418 Jul 15 13:03:29.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7418 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 15 13:03:29.713: 
INFO: stderr: "I0715 13:03:29.642446 353 log.go:172] (0xc000514c60) (0xc000690c80) Create stream\nI0715 13:03:29.642491 353 log.go:172] (0xc000514c60) (0xc000690c80) Stream added, broadcasting: 1\nI0715 13:03:29.645715 353 log.go:172] (0xc000514c60) Reply frame received for 1\nI0715 13:03:29.645755 353 log.go:172] (0xc000514c60) (0xc0006903c0) Create stream\nI0715 13:03:29.645767 353 log.go:172] (0xc000514c60) (0xc0006903c0) Stream added, broadcasting: 3\nI0715 13:03:29.646647 353 log.go:172] (0xc000514c60) Reply frame received for 3\nI0715 13:03:29.646717 353 log.go:172] (0xc000514c60) (0xc0002ce000) Create stream\nI0715 13:03:29.646738 353 log.go:172] (0xc000514c60) (0xc0002ce000) Stream added, broadcasting: 5\nI0715 13:03:29.647774 353 log.go:172] (0xc000514c60) Reply frame received for 5\nI0715 13:03:29.707147 353 log.go:172] (0xc000514c60) Data frame received for 3\nI0715 13:03:29.707196 353 log.go:172] (0xc0006903c0) (3) Data frame handling\nI0715 13:03:29.707234 353 log.go:172] (0xc0006903c0) (3) Data frame sent\nI0715 13:03:29.707483 353 log.go:172] (0xc000514c60) Data frame received for 5\nI0715 13:03:29.707519 353 log.go:172] (0xc0002ce000) (5) Data frame handling\nI0715 13:03:29.707534 353 log.go:172] (0xc0002ce000) (5) Data frame sent\nI0715 13:03:29.707546 353 log.go:172] (0xc000514c60) Data frame received for 5\nI0715 13:03:29.707556 353 log.go:172] (0xc0002ce000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0715 13:03:29.707590 353 log.go:172] (0xc000514c60) Data frame received for 3\nI0715 13:03:29.707609 353 log.go:172] (0xc0006903c0) (3) Data frame handling\nI0715 13:03:29.709378 353 log.go:172] (0xc000514c60) Data frame received for 1\nI0715 13:03:29.709418 353 log.go:172] (0xc000690c80) (1) Data frame handling\nI0715 13:03:29.709439 353 log.go:172] (0xc000690c80) (1) Data frame sent\nI0715 13:03:29.709471 353 log.go:172] (0xc000514c60) (0xc000690c80) Stream removed, broadcasting: 1\nI0715 13:03:29.709501 353 log.go:172] (0xc000514c60) Go away received\nI0715 13:03:29.709840 353 log.go:172] (0xc000514c60) (0xc000690c80) Stream removed, broadcasting: 1\nI0715 13:03:29.709859 353 log.go:172] (0xc000514c60) (0xc0006903c0) Stream removed, broadcasting: 3\nI0715 13:03:29.709866 353 log.go:172] (0xc000514c60) (0xc0002ce000) Stream removed, broadcasting: 5\n" Jul 15 13:03:29.713: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 15 13:03:29.713: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 15 13:03:29.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7418 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 15 13:03:29.975: INFO: stderr: "I0715 13:03:29.905483 375 log.go:172] (0xc000116fd0) (0xc0007b8820) Create stream\nI0715 13:03:29.905553 375 log.go:172] (0xc000116fd0) (0xc0007b8820) Stream added, broadcasting: 1\nI0715 13:03:29.908614 375 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0715 13:03:29.908656 375 log.go:172] (0xc000116fd0) (0xc0007b8000) Create stream\nI0715 13:03:29.908668 375 log.go:172] (0xc000116fd0) (0xc0007b8000) Stream added, broadcasting: 3\nI0715 13:03:29.909599 375 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0715 13:03:29.909647 375 log.go:172] (0xc000116fd0) (0xc0006741e0) Create stream\nI0715 13:03:29.909671 375 log.go:172] (0xc000116fd0) (0xc0006741e0) Stream added, broadcasting: 5\nI0715 
13:03:29.910512 375 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0715 13:03:29.967779 375 log.go:172] (0xc000116fd0) Data frame received for 3\nI0715 13:03:29.967836 375 log.go:172] (0xc0007b8000) (3) Data frame handling\nI0715 13:03:29.967853 375 log.go:172] (0xc0007b8000) (3) Data frame sent\nI0715 13:03:29.967864 375 log.go:172] (0xc000116fd0) Data frame received for 3\nI0715 13:03:29.967875 375 log.go:172] (0xc0007b8000) (3) Data frame handling\nI0715 13:03:29.967897 375 log.go:172] (0xc000116fd0) Data frame received for 5\nI0715 13:03:29.967911 375 log.go:172] (0xc0006741e0) (5) Data frame handling\nI0715 13:03:29.967919 375 log.go:172] (0xc0006741e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0715 13:03:29.968115 375 log.go:172] (0xc000116fd0) Data frame received for 5\nI0715 13:03:29.968139 375 log.go:172] (0xc0006741e0) (5) Data frame handling\nI0715 13:03:29.970506 375 log.go:172] (0xc000116fd0) Data frame received for 1\nI0715 13:03:29.970526 375 log.go:172] (0xc0007b8820) (1) Data frame handling\nI0715 13:03:29.970545 375 log.go:172] (0xc0007b8820) (1) Data frame sent\nI0715 13:03:29.970557 375 log.go:172] (0xc000116fd0) (0xc0007b8820) Stream removed, broadcasting: 1\nI0715 13:03:29.970566 375 log.go:172] (0xc000116fd0) Go away received\nI0715 13:03:29.971202 375 log.go:172] (0xc000116fd0) (0xc0007b8820) Stream removed, broadcasting: 1\nI0715 13:03:29.971230 375 log.go:172] (0xc000116fd0) (0xc0007b8000) Stream removed, broadcasting: 3\nI0715 13:03:29.971247 375 log.go:172] (0xc000116fd0) (0xc0006741e0) Stream removed, broadcasting: 5\n" Jul 15 13:03:29.975: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 15 13:03:29.975: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 15 13:03:29.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7418 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 15 13:03:30.164: INFO: stderr: "I0715 13:03:30.095287 394 log.go:172] (0xc00012adc0) (0xc0003fc6e0) Create stream\nI0715 13:03:30.095341 394 log.go:172] (0xc00012adc0) (0xc0003fc6e0) Stream added, broadcasting: 1\nI0715 13:03:30.098721 394 log.go:172] (0xc00012adc0) Reply frame received for 1\nI0715 13:03:30.098776 394 log.go:172] (0xc00012adc0) (0xc0003fc000) Create stream\nI0715 13:03:30.098801 394 log.go:172] (0xc00012adc0) (0xc0003fc000) Stream added, broadcasting: 3\nI0715 13:03:30.099926 394 log.go:172] (0xc00012adc0) Reply frame received for 3\nI0715 13:03:30.099972 394 log.go:172] (0xc00012adc0) (0xc0002b6320) Create stream\nI0715 13:03:30.100003 394 log.go:172] (0xc00012adc0) (0xc0002b6320) Stream added, broadcasting: 5\nI0715 13:03:30.101126 394 log.go:172] (0xc00012adc0) Reply frame received for 5\nI0715 13:03:30.158809 394 log.go:172] (0xc00012adc0) Data frame received for 5\nI0715 13:03:30.158832 394 log.go:172] (0xc0002b6320) (5) Data frame handling\nI0715 13:03:30.158840 394 log.go:172] (0xc0002b6320) (5) Data frame sent\nI0715 13:03:30.158846 394 log.go:172] (0xc00012adc0) Data frame received for 5\nI0715 13:03:30.158851 394 log.go:172] (0xc0002b6320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0715 13:03:30.158862 394 log.go:172] (0xc00012adc0) Data frame received for 3\nI0715 
13:03:30.158867 394 log.go:172] (0xc0003fc000) (3) Data frame handling\nI0715 13:03:30.158878 394 log.go:172] (0xc0003fc000) (3) Data frame sent\nI0715 13:03:30.158884 394 log.go:172] (0xc00012adc0) Data frame received for 3\nI0715 13:03:30.158889 394 log.go:172] (0xc0003fc000) (3) Data frame handling\nI0715 13:03:30.160215 394 log.go:172] (0xc00012adc0) Data frame received for 1\nI0715 13:03:30.160275 394 log.go:172] (0xc0003fc6e0) (1) Data frame handling\nI0715 13:03:30.160308 394 log.go:172] (0xc0003fc6e0) (1) Data frame sent\nI0715 13:03:30.160324 394 log.go:172] (0xc00012adc0) (0xc0003fc6e0) Stream removed, broadcasting: 1\nI0715 13:03:30.160385 394 log.go:172] (0xc00012adc0) Go away received\nI0715 13:03:30.160625 394 log.go:172] (0xc00012adc0) (0xc0003fc6e0) Stream removed, broadcasting: 1\nI0715 13:03:30.160647 394 log.go:172] (0xc00012adc0) (0xc0003fc000) Stream removed, broadcasting: 3\nI0715 13:03:30.160657 394 log.go:172] (0xc00012adc0) (0xc0002b6320) Stream removed, broadcasting: 5\n" Jul 15 13:03:30.164: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 15 13:03:30.164: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 15 13:03:30.169: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 15 13:03:30.169: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 15 13:03:30.169: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jul 15 13:03:30.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7418 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 15 13:03:30.388: INFO: stderr: "I0715 13:03:30.308412 415 log.go:172] (0xc000a7e630) (0xc0005f0a00) Create stream\nI0715 13:03:30.308486 415 log.go:172] (0xc000a7e630) (0xc0005f0a00) Stream added, broadcasting: 1\nI0715 13:03:30.312297 415 log.go:172] (0xc000a7e630) Reply frame received for 1\nI0715 13:03:30.312346 415 log.go:172] (0xc000a7e630) (0xc0005f0280) Create stream\nI0715 13:03:30.312373 415 log.go:172] (0xc000a7e630) (0xc0005f0280) Stream added, broadcasting: 3\nI0715 13:03:30.313374 415 log.go:172] (0xc000a7e630) Reply frame received for 3\nI0715 13:03:30.313426 415 log.go:172] (0xc000a7e630) (0xc000636000) Create stream\nI0715 13:03:30.313454 415 log.go:172] (0xc000a7e630) (0xc000636000) Stream added, broadcasting: 5\nI0715 13:03:30.314285 415 log.go:172] (0xc000a7e630) Reply frame received for 5\nI0715 13:03:30.383340 415 log.go:172] (0xc000a7e630) Data frame received for 3\nI0715 13:03:30.383383 415 log.go:172] (0xc0005f0280) (3) Data frame handling\nI0715 13:03:30.383398 415 log.go:172] (0xc0005f0280) (3) Data frame sent\nI0715 13:03:30.383423 415 log.go:172] (0xc000a7e630) Data frame received for 5\nI0715 13:03:30.383437 415 log.go:172] (0xc000636000) (5) Data frame handling\nI0715 13:03:30.383453 415 log.go:172] (0xc000636000) (5) Data frame sent\nI0715 13:03:30.383463 415 log.go:172] (0xc000a7e630) Data frame received for 5\nI0715 13:03:30.383472 415 log.go:172] (0xc000636000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0715 13:03:30.383550 415 log.go:172] (0xc000a7e630) Data frame received for 3\nI0715 13:03:30.383586 415 log.go:172] (0xc0005f0280) (3) Data frame handling\nI0715 13:03:30.385114 415 log.go:172] 
(0xc000a7e630) Data frame received for 1\nI0715 13:03:30.385143 415 log.go:172] (0xc0005f0a00) (1) Data frame handling\nI0715 13:03:30.385164 415 log.go:172] (0xc0005f0a00) (1) Data frame sent\nI0715 13:03:30.385184 415 log.go:172] (0xc000a7e630) (0xc0005f0a00) Stream removed, broadcasting: 1\nI0715 13:03:30.385202 415 log.go:172] (0xc000a7e630) Go away received\nI0715 13:03:30.385656 415 log.go:172] (0xc000a7e630) (0xc0005f0a00) Stream removed, broadcasting: 1\nI0715 13:03:30.385684 415 log.go:172] (0xc000a7e630) (0xc0005f0280) Stream removed, broadcasting: 3\nI0715 13:03:30.385714 415 log.go:172] (0xc000a7e630) (0xc000636000) Stream removed, broadcasting: 5\n" Jul 15 13:03:30.388: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 15 13:03:30.388: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 15 13:03:30.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7418 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 15 13:03:30.660: INFO: stderr: "I0715 13:03:30.516278 436 log.go:172] (0xc000116dc0) (0xc0006a4820) Create stream\nI0715 13:03:30.516364 436 log.go:172] (0xc000116dc0) (0xc0006a4820) Stream added, broadcasting: 1\nI0715 13:03:30.521075 436 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0715 13:03:30.521111 436 log.go:172] (0xc000116dc0) (0xc0006a4000) Create stream\nI0715 13:03:30.521129 436 log.go:172] (0xc000116dc0) (0xc0006a4000) Stream added, broadcasting: 3\nI0715 13:03:30.522113 436 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0715 13:03:30.522147 436 log.go:172] (0xc000116dc0) (0xc0005b41e0) Create stream\nI0715 13:03:30.522157 436 log.go:172] (0xc000116dc0) (0xc0005b41e0) Stream added, broadcasting: 5\nI0715 13:03:30.523008 436 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0715 13:03:30.567982 436 log.go:172] (0xc000116dc0) Data frame received for 5\nI0715 13:03:30.568002 436 log.go:172] (0xc0005b41e0) (5) Data frame handling\nI0715 13:03:30.568016 436 log.go:172] (0xc0005b41e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0715 13:03:30.655800 436 log.go:172] (0xc000116dc0) Data frame received for 3\nI0715 13:03:30.655931 436 log.go:172] (0xc0006a4000) (3) Data frame handling\nI0715 13:03:30.655976 436 log.go:172] (0xc0006a4000) (3) Data frame sent\nI0715 13:03:30.656154 436 log.go:172] (0xc000116dc0) Data frame received for 3\nI0715 13:03:30.656179 436 log.go:172] (0xc0006a4000) (3) Data frame handling\nI0715 13:03:30.656244 436 log.go:172] (0xc000116dc0) Data frame received for 5\nI0715 13:03:30.656297 436 log.go:172] (0xc0005b41e0) (5) Data frame handling\nI0715 13:03:30.657763 436 log.go:172] (0xc000116dc0) Data frame received for 1\nI0715 13:03:30.657777 436 log.go:172] (0xc0006a4820) (1) Data frame handling\nI0715 13:03:30.657783 436 log.go:172] (0xc0006a4820) (1) Data frame sent\nI0715 13:03:30.657796 436 log.go:172] (0xc000116dc0) (0xc0006a4820) Stream removed, broadcasting: 1\nI0715 13:03:30.657831 436 log.go:172] (0xc000116dc0) Go away received\nI0715 13:03:30.657978 436 log.go:172] (0xc000116dc0) (0xc0006a4820) Stream removed, broadcasting: 1\nI0715 13:03:30.657990 436 log.go:172] (0xc000116dc0) (0xc0006a4000) Stream removed, broadcasting: 3\nI0715 13:03:30.658002 436 log.go:172] (0xc000116dc0) (0xc0005b41e0) Stream removed, broadcasting: 5\n" Jul 15 13:03:30.661: INFO: stdout: "'/usr/share/nginx/html/index.html' 
-> '/tmp/index.html'\n" Jul 15 13:03:30.661: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 15 13:03:30.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7418 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 15 13:03:30.888: INFO: stderr: "I0715 13:03:30.776982 455 log.go:172] (0xc000aee370) (0xc00061a820) Create stream\nI0715 13:03:30.777061 455 log.go:172] (0xc000aee370) (0xc00061a820) Stream added, broadcasting: 1\nI0715 13:03:30.780229 455 log.go:172] (0xc000aee370) Reply frame received for 1\nI0715 13:03:30.780266 455 log.go:172] (0xc000aee370) (0xc00061a000) Create stream\nI0715 13:03:30.780283 455 log.go:172] (0xc000aee370) (0xc00061a000) Stream added, broadcasting: 3\nI0715 13:03:30.781142 455 log.go:172] (0xc000aee370) Reply frame received for 3\nI0715 13:03:30.781183 455 log.go:172] (0xc000aee370) (0xc00039a140) Create stream\nI0715 13:03:30.781202 455 log.go:172] (0xc000aee370) (0xc00039a140) Stream added, broadcasting: 5\nI0715 13:03:30.781862 455 log.go:172] (0xc000aee370) Reply frame received for 5\nI0715 13:03:30.847497 455 log.go:172] (0xc000aee370) Data frame received for 5\nI0715 13:03:30.847517 455 log.go:172] (0xc00039a140) (5) Data frame handling\nI0715 13:03:30.847529 455 log.go:172] (0xc00039a140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0715 13:03:30.883280 455 log.go:172] (0xc000aee370) Data frame received for 3\nI0715 13:03:30.883311 455 log.go:172] (0xc00061a000) (3) Data frame handling\nI0715 13:03:30.883338 455 log.go:172] (0xc00061a000) (3) Data frame sent\nI0715 13:03:30.883720 455 log.go:172] (0xc000aee370) Data frame received for 3\nI0715 13:03:30.883766 455 log.go:172] (0xc00061a000) (3) Data frame handling\nI0715 13:03:30.883797 455 log.go:172] (0xc000aee370) Data frame received for 5\nI0715 13:03:30.883821 455 log.go:172] (0xc00039a140) (5) Data frame handling\nI0715 13:03:30.884989 455 log.go:172] (0xc000aee370) Data frame received for 1\nI0715 13:03:30.885020 455 log.go:172] (0xc00061a820) (1) Data frame handling\nI0715 13:03:30.885048 455 log.go:172] (0xc00061a820) (1) Data frame sent\nI0715 13:03:30.885081 455 log.go:172] (0xc000aee370) (0xc00061a820) Stream removed, broadcasting: 1\nI0715 13:03:30.885110 455 log.go:172] (0xc000aee370) Go away received\nI0715 13:03:30.885457 455 log.go:172] (0xc000aee370) (0xc00061a820) Stream removed, broadcasting: 1\nI0715 13:03:30.885474 455 log.go:172] (0xc000aee370) (0xc00061a000) Stream removed, broadcasting: 3\nI0715 13:03:30.885484 455 log.go:172] (0xc000aee370) (0xc00039a140) Stream removed, broadcasting: 5\n" Jul 15 13:03:30.888: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 15 13:03:30.888: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 15 13:03:30.888: INFO: Waiting for statefulset status.replicas updated to 0 Jul 15 13:03:30.891: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jul 15 13:03:40.898: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 15 13:03:40.898: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 15 13:03:40.898: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 15 13:03:40.908: INFO: POD NODE PHASE GRACE 
CONDITIONS Jul 15 13:03:40.908: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:02:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:02:58 +0000 UTC }] Jul 15 13:03:40.908: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC }] Jul 15 13:03:40.908: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC }] Jul 15 13:03:40.908: INFO: Jul 15 13:03:40.908: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 15 13:03:41.919: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 13:03:41.919: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:02:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:02:58 +0000 UTC }] Jul 15 13:03:41.919: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC }] Jul 15 13:03:41.919: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC }] Jul 15 13:03:41.919: INFO: Jul 15 13:03:41.919: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 15 13:03:42.929: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 13:03:42.929: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:02:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:02:58 +0000 UTC }] Jul 15 13:03:42.929: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC }] Jul 15 13:03:42.929: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC }] Jul 15 13:03:42.929: INFO: Jul 15 13:03:42.929: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 15 13:03:43.934: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 13:03:43.934: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC }] Jul 15 13:03:43.934: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC }] Jul 15 13:03:43.934: INFO: Jul 15 13:03:43.934: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 15 13:03:44.938: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 13:03:44.938: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC }] Jul 15 13:03:44.938: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC }] Jul 15 13:03:44.938: 
INFO: Jul 15 13:03:44.938: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 15 13:03:45.943: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 13:03:45.943: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC }] Jul 15 13:03:45.943: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:03:18 +0000 UTC }] Jul 15 13:03:45.943: INFO: Jul 15 13:03:45.943: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 15 13:03:46.948: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.962194946s Jul 15 13:03:47.953: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.956565866s Jul 15 13:03:48.957: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.951890128s Jul 15 13:03:49.961: INFO: Verifying statefulset ss doesn't scale past 0 for another 947.832503ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-7418 Jul 15 13:03:50.966: INFO: Scaling statefulset ss to 0 Jul 15 13:03:50.976: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jul 15 13:03:50.979: INFO: Deleting all statefulsets in ns statefulset-7418 Jul 15 13:03:50.981: INFO: Scaling statefulset ss to 0 Jul 15 13:03:50.989: INFO: Waiting for statefulset status.replicas updated to 0 Jul 15 13:03:50.991: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:03:51.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7418" for this suite.
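To reproduce the scenario above outside the e2e harness: the test drives a plain nginx StatefulSet whose readiness flips when index.html is moved aside, then scales it while pods are unready. A minimal sketch, with the manifest as an assumption for illustration (the probe path, service name, and Parallel pod management are inferred from the burst-scaling behavior in the log, not copied from the test source):

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss                        # matches the pod names ss-0, ss-1, ss-2 seen above
  spec:
    serviceName: test               # assumed headless-service name
    replicas: 1
    podManagementPolicy: Parallel   # "burst" scaling: pods are created/deleted without waiting
    selector:
      matchLabels:
        app: ss
    template:
      metadata:
        labels:
          app: ss
      spec:
        containers:
        - name: nginx
          image: docker.io/library/nginx:1.14-alpine
          readinessProbe:           # fails once index.html is moved to /tmp, as the test does
            httpGet:
              path: /index.html
              port: 80
  EOF
  # Scale while some pods are unready; with Parallel pod management the
  # controller does not wait for readiness before adding or removing pods.
  kubectl scale statefulset ss --replicas=3
  kubectl scale statefulset ss --replicas=0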
Jul 15 13:03:57.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:03:57.107: INFO: namespace statefulset-7418 deletion completed in 6.098551683s • [SLOW TEST:58.592 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:03:57.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 15 13:04:05.236: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 13:04:05.258: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 13:04:07.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 13:04:07.263: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 13:04:09.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 13:04:09.263: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 13:04:11.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 13:04:11.263: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 13:04:13.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 13:04:13.263: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 13:04:15.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 13:04:15.262: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 13:04:17.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 13:04:17.263: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 13:04:19.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 13:04:19.263: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 13:04:21.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 13:04:21.263: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 13:04:23.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 13:04:23.262: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 13:04:25.259: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear Jul 15 13:04:25.262: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 13:04:27.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 13:04:27.262: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:04:27.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1751" for this suite. Jul 15 13:04:41.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:04:41.366: INFO: namespace container-lifecycle-hook-1751 deletion completed in 14.099596494s • [SLOW TEST:44.258 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:04:41.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 15 13:04:45.985: INFO: Successfully updated pod "pod-update-activedeadlineseconds-eadb9edf-dfa2-4a2a-bed5-18daeef04faf" Jul 15 13:04:45.985: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-eadb9edf-dfa2-4a2a-bed5-18daeef04faf" in namespace "pods-122" to be "terminated due to deadline exceeded" Jul 15 13:04:46.006: INFO: Pod "pod-update-activedeadlineseconds-eadb9edf-dfa2-4a2a-bed5-18daeef04faf": Phase="Running", Reason="", readiness=true. Elapsed: 21.45443ms Jul 15 13:04:48.009: INFO: Pod "pod-update-activedeadlineseconds-eadb9edf-dfa2-4a2a-bed5-18daeef04faf": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.024809364s Jul 15 13:04:48.010: INFO: Pod "pod-update-activedeadlineseconds-eadb9edf-dfa2-4a2a-bed5-18daeef04faf" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:04:48.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-122" for this suite. 
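The activeDeadlineSeconds update logged above can be reproduced with a patch: activeDeadlineSeconds is one of the few pod spec fields that may be changed on a running pod, and shrinking it makes the kubelet terminate the pod with reason DeadlineExceeded. A sketch with an illustrative pod name and deadline:

  kubectl run deadline-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never
  kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'   # shrink the deadline on the live pod
  # after roughly 5s the pod is failed by the kubelet:
  kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}'  # expect Failed/DeadlineExceeded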
Jul 15 13:04:54.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:04:54.171: INFO: namespace pods-122 deletion completed in 6.157290052s • [SLOW TEST:12.805 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:04:54.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jul 15 13:04:54.258: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:05:06.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-880" for this suite. 
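The watch this test sets up through the API can be approximated from the CLI; the pod name below is illustrative:

  kubectl get pods --watch &                        # streams each pod state transition as it is observed
  kubectl run watch-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never
  kubectl delete pod watch-demo --grace-period=30   # graceful delete: the watch shows the termination notice, then the final removal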
Jul 15 13:05:12.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:05:12.897: INFO: namespace pods-880 deletion completed in 6.09120249s • [SLOW TEST:18.726 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:05:12.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-6bhn STEP: Creating a pod to test atomic-volume-subpath Jul 15 13:05:12.978: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-6bhn" in namespace "subpath-8878" to be "success or failure" Jul 15 13:05:13.007: INFO: Pod "pod-subpath-test-secret-6bhn": Phase="Pending", Reason="", readiness=false. Elapsed: 28.887516ms Jul 15 13:05:15.202: INFO: Pod "pod-subpath-test-secret-6bhn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224139753s Jul 15 13:05:17.206: INFO: Pod "pod-subpath-test-secret-6bhn": Phase="Running", Reason="", readiness=true. Elapsed: 4.228476734s Jul 15 13:05:19.215: INFO: Pod "pod-subpath-test-secret-6bhn": Phase="Running", Reason="", readiness=true. Elapsed: 6.236915437s Jul 15 13:05:21.219: INFO: Pod "pod-subpath-test-secret-6bhn": Phase="Running", Reason="", readiness=true. Elapsed: 8.240888194s Jul 15 13:05:23.223: INFO: Pod "pod-subpath-test-secret-6bhn": Phase="Running", Reason="", readiness=true. Elapsed: 10.245070721s Jul 15 13:05:25.226: INFO: Pod "pod-subpath-test-secret-6bhn": Phase="Running", Reason="", readiness=true. Elapsed: 12.248366179s Jul 15 13:05:27.231: INFO: Pod "pod-subpath-test-secret-6bhn": Phase="Running", Reason="", readiness=true. Elapsed: 14.252805677s Jul 15 13:05:29.235: INFO: Pod "pod-subpath-test-secret-6bhn": Phase="Running", Reason="", readiness=true. Elapsed: 16.256981222s Jul 15 13:05:31.239: INFO: Pod "pod-subpath-test-secret-6bhn": Phase="Running", Reason="", readiness=true. Elapsed: 18.261123435s Jul 15 13:05:33.243: INFO: Pod "pod-subpath-test-secret-6bhn": Phase="Running", Reason="", readiness=true. Elapsed: 20.26457933s Jul 15 13:05:35.246: INFO: Pod "pod-subpath-test-secret-6bhn": Phase="Running", Reason="", readiness=true. Elapsed: 22.268439145s Jul 15 13:05:37.251: INFO: Pod "pod-subpath-test-secret-6bhn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.273264288s STEP: Saw pod success Jul 15 13:05:37.251: INFO: Pod "pod-subpath-test-secret-6bhn" satisfied condition "success or failure" Jul 15 13:05:37.254: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-6bhn container test-container-subpath-secret-6bhn: STEP: delete the pod Jul 15 13:05:37.277: INFO: Waiting for pod pod-subpath-test-secret-6bhn to disappear Jul 15 13:05:37.281: INFO: Pod pod-subpath-test-secret-6bhn no longer exists STEP: Deleting pod pod-subpath-test-secret-6bhn Jul 15 13:05:37.281: INFO: Deleting pod "pod-subpath-test-secret-6bhn" in namespace "subpath-8878" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:05:37.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8878" for this suite. Jul 15 13:05:43.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:05:43.386: INFO: namespace subpath-8878 deletion completed in 6.101240287s • [SLOW TEST:30.489 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:05:43.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 15 13:05:43.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-4411' Jul 15 13:05:43.576: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 15 13:05:43.576: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Jul 15 13:05:47.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4411' Jul 15 13:05:47.723: INFO: stderr: "" Jul 15 13:05:47.723: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:05:47.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4411" for this suite. Jul 15 13:06:09.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:06:09.831: INFO: namespace kubectl-4411 deletion completed in 22.103621957s • [SLOW TEST:26.443 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:06:09.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-95b76d5a-be96-4dd9-b19c-9122c6e43ae9 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:06:13.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-919" for this suite. 
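The ConfigMap used in the test above carries both text and binary payloads. A sketch of such an object (name, keys, and bytes are illustrative); when mounted as a configMap volume, each data and binaryData key becomes a file containing the decoded bytes:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cm-binary-demo
  data:
    text-key: "hello"
  binaryData:
    binary-key: 3q2+7w==   # base64 of the raw bytes de ad be ef
  EOF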
Jul 15 13:06:36.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:06:36.072: INFO: namespace configmap-919 deletion completed in 22.114228205s • [SLOW TEST:26.241 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:06:36.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 15 13:06:36.237: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a8cfe91-2057-406a-ba2d-3963920cc903" in namespace "downward-api-1677" to be "success or failure" Jul 15 13:06:36.241: INFO: Pod "downwardapi-volume-9a8cfe91-2057-406a-ba2d-3963920cc903": Phase="Pending", Reason="", readiness=false. Elapsed: 3.681869ms Jul 15 13:06:38.246: INFO: Pod "downwardapi-volume-9a8cfe91-2057-406a-ba2d-3963920cc903": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008384264s Jul 15 13:06:40.250: INFO: Pod "downwardapi-volume-9a8cfe91-2057-406a-ba2d-3963920cc903": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012805119s STEP: Saw pod success Jul 15 13:06:40.250: INFO: Pod "downwardapi-volume-9a8cfe91-2057-406a-ba2d-3963920cc903" satisfied condition "success or failure" Jul 15 13:06:40.253: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9a8cfe91-2057-406a-ba2d-3963920cc903 container client-container: STEP: delete the pod Jul 15 13:06:40.279: INFO: Waiting for pod downwardapi-volume-9a8cfe91-2057-406a-ba2d-3963920cc903 to disappear Jul 15 13:06:40.294: INFO: Pod downwardapi-volume-9a8cfe91-2057-406a-ba2d-3963920cc903 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:06:40.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1677" for this suite. 
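The downward API volume exercised above maps a container's resource limit to a file via resourceFieldRef. A minimal sketch, with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-cpu-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      resources:
        limits:
          cpu: 500m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_limit
          resourceFieldRef:            # projects the container's own cpu limit into the file
            containerName: client-container
            resource: limits.cpu
  EOF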
Jul 15 13:06:46.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:06:46.433: INFO: namespace downward-api-1677 deletion completed in 6.135759747s • [SLOW TEST:10.361 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:06:46.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 15 13:06:51.045: INFO: Successfully updated pod "pod-update-b026de16-7cb4-4ebb-8976-f3781a295749" STEP: verifying the updated pod is in kubernetes Jul 15 13:06:51.070: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:06:51.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9811" for this suite. 
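The update in this test modifies the live pod object in place. A label change is a simple stand-in, since labels remain freely mutable on a running pod; the pod name is illustrative:

  kubectl label pod pod-demo time=updated --overwrite   # mutate the running pod's metadata
  kubectl get pod pod-demo --show-labels                # confirm the update was applied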
Jul 15 13:07:13.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:07:13.172: INFO: namespace pods-9811 deletion completed in 22.097928182s • [SLOW TEST:26.738 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:07:13.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:07:13.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2919" for this suite. 
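The QOS class verified above is derived from the pod's resource stanza: requests equal to limits for every container yields Guaranteed, requests set but below limits (or set on only some containers) yields Burstable, and no requests or limits at all yields BestEffort. A sketch of the Guaranteed case:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: qos-demo
  spec:
    containers:
    - name: nginx
      image: docker.io/library/nginx:1.14-alpine
      resources:
        requests: {cpu: 100m, memory: 100Mi}
        limits: {cpu: 100m, memory: 100Mi}    # requests == limits => Guaranteed
  EOF
  kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # expect Guaranteed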
Jul 15 13:07:35.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:07:35.455: INFO: namespace pods-2919 deletion completed in 22.150201599s • [SLOW TEST:22.283 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:07:35.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 15 13:07:35.520: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50be44a9-3848-4fb4-9d63-a649b60dad7c" in namespace "downward-api-4825" to be "success or failure" Jul 15 13:07:35.525: INFO: Pod "downwardapi-volume-50be44a9-3848-4fb4-9d63-a649b60dad7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.422797ms Jul 15 13:07:37.538: INFO: Pod "downwardapi-volume-50be44a9-3848-4fb4-9d63-a649b60dad7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017978693s Jul 15 13:07:39.542: INFO: Pod "downwardapi-volume-50be44a9-3848-4fb4-9d63-a649b60dad7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021353621s STEP: Saw pod success Jul 15 13:07:39.542: INFO: Pod "downwardapi-volume-50be44a9-3848-4fb4-9d63-a649b60dad7c" satisfied condition "success or failure" Jul 15 13:07:39.545: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-50be44a9-3848-4fb4-9d63-a649b60dad7c container client-container: STEP: delete the pod Jul 15 13:07:39.617: INFO: Waiting for pod downwardapi-volume-50be44a9-3848-4fb4-9d63-a649b60dad7c to disappear Jul 15 13:07:39.620: INFO: Pod downwardapi-volume-50be44a9-3848-4fb4-9d63-a649b60dad7c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:07:39.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4825" for this suite. 
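Here the downward API volume exposes the pod's own name through fieldRef rather than resourceFieldRef. A compact sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-podname-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["cat", "/etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name   # the pod reads its own name back from the file
  EOF
  kubectl logs downward-podname-demo   # prints: downward-podname-demo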
Jul 15 13:07:45.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:07:45.713: INFO: namespace downward-api-4825 deletion completed in 6.089991965s • [SLOW TEST:10.257 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:07:45.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-3239d420-37df-422c-8b6f-02c72efcd86e in namespace container-probe-620 Jul 15 13:07:49.797: INFO: Started pod busybox-3239d420-37df-422c-8b6f-02c72efcd86e in namespace container-probe-620 STEP: checking the pod's current state and verifying that restartCount is present Jul 15 13:07:49.799: INFO: Initial restart count of pod busybox-3239d420-37df-422c-8b6f-02c72efcd86e is 0 Jul 15 13:08:38.334: INFO: Restart count of pod container-probe-620/busybox-3239d420-37df-422c-8b6f-02c72efcd86e is now 1 (48.534744543s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:08:38.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-620" for this suite. 
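The probe in this test succeeds while /tmp/health exists and fails once it is removed, forcing exactly the single restart recorded above. A sketch with illustrative timings:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-exec-demo
  spec:
    containers:
    - name: busybox
      image: busybox
      args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]   # healthy only while the file exists
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  # restartCount climbs once the probe starts failing:
  kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'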
Jul 15 13:08:44.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:08:44.501: INFO: namespace container-probe-620 deletion completed in 6.124119347s • [SLOW TEST:58.788 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:08:44.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-3819cf4b-a0c0-4ef4-90ec-b4c3a446894b in namespace container-probe-1257 Jul 15 13:08:50.623: INFO: Started pod test-webserver-3819cf4b-a0c0-4ef4-90ec-b4c3a446894b in namespace container-probe-1257 STEP: checking the pod's current state and verifying that restartCount is present Jul 15 13:08:50.625: INFO: Initial restart count of pod test-webserver-3819cf4b-a0c0-4ef4-90ec-b4c3a446894b is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:12:51.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1257" for this suite. 
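The complementary case: an HTTP liveness probe against an endpoint that keeps answering 2xx, so restartCount must stay 0 for the whole observation window. A sketch; the test's own webserver image and port differ, nginx on port 80 is an illustrative substitute:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-http-demo
  spec:
    containers:
    - name: web
      image: docker.io/library/nginx:1.14-alpine
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
        failureThreshold: 3   # three consecutive failures are needed before a restart
  EOF
  kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'   # expect 0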
Jul 15 13:12:57.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:12:57.901: INFO: namespace container-probe-1257 deletion completed in 6.220941711s • [SLOW TEST:253.399 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:12:57.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jul 15 13:12:57.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8253' Jul 15 13:13:01.057: INFO: stderr: "" Jul 15 13:13:01.057: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 15 13:13:01.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8253' Jul 15 13:13:01.195: INFO: stderr: "" Jul 15 13:13:01.195: INFO: stdout: "update-demo-nautilus-74pw9 update-demo-nautilus-m74lr " Jul 15 13:13:01.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-74pw9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8253' Jul 15 13:13:01.285: INFO: stderr: "" Jul 15 13:13:01.285: INFO: stdout: "" Jul 15 13:13:01.285: INFO: update-demo-nautilus-74pw9 is created but not running Jul 15 13:13:06.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8253' Jul 15 13:13:06.378: INFO: stderr: "" Jul 15 13:13:06.378: INFO: stdout: "update-demo-nautilus-74pw9 update-demo-nautilus-m74lr " Jul 15 13:13:06.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-74pw9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8253' Jul 15 13:13:06.466: INFO: stderr: "" Jul 15 13:13:06.466: INFO: stdout: "true" Jul 15 13:13:06.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-74pw9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8253' Jul 15 13:13:06.560: INFO: stderr: "" Jul 15 13:13:06.560: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 15 13:13:06.560: INFO: validating pod update-demo-nautilus-74pw9 Jul 15 13:13:06.614: INFO: got data: { "image": "nautilus.jpg" } Jul 15 13:13:06.614: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 15 13:13:06.614: INFO: update-demo-nautilus-74pw9 is verified up and running Jul 15 13:13:06.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m74lr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8253' Jul 15 13:13:06.705: INFO: stderr: "" Jul 15 13:13:06.705: INFO: stdout: "true" Jul 15 13:13:06.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m74lr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8253' Jul 15 13:13:06.795: INFO: stderr: "" Jul 15 13:13:06.795: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 15 13:13:06.795: INFO: validating pod update-demo-nautilus-m74lr Jul 15 13:13:06.821: INFO: got data: { "image": "nautilus.jpg" } Jul 15 13:13:06.821: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 15 13:13:06.821: INFO: update-demo-nautilus-m74lr is verified up and running STEP: scaling down the replication controller Jul 15 13:13:06.824: INFO: scanned /root for discovery docs: Jul 15 13:13:06.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8253' Jul 15 13:13:07.954: INFO: stderr: "" Jul 15 13:13:07.954: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jul 15 13:13:07.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8253' Jul 15 13:13:08.054: INFO: stderr: "" Jul 15 13:13:08.054: INFO: stdout: "update-demo-nautilus-74pw9 update-demo-nautilus-m74lr " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 15 13:13:13.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8253' Jul 15 13:13:13.150: INFO: stderr: "" Jul 15 13:13:13.150: INFO: stdout: "update-demo-nautilus-74pw9 update-demo-nautilus-m74lr " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 15 13:13:18.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8253' Jul 15 13:13:18.245: INFO: stderr: "" Jul 15 13:13:18.245: INFO: stdout: "update-demo-nautilus-74pw9 " Jul 15 13:13:18.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-74pw9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8253' Jul 15 13:13:18.334: INFO: stderr: "" Jul 15 13:13:18.334: INFO: stdout: "true" Jul 15 13:13:18.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-74pw9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8253' Jul 15 13:13:18.445: INFO: stderr: "" Jul 15 13:13:18.445: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 15 13:13:18.445: INFO: validating pod update-demo-nautilus-74pw9 Jul 15 13:13:18.448: INFO: got data: { "image": "nautilus.jpg" } Jul 15 13:13:18.448: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 15 13:13:18.448: INFO: update-demo-nautilus-74pw9 is verified up and running STEP: scaling up the replication controller Jul 15 13:13:18.450: INFO: scanned /root for discovery docs: Jul 15 13:13:18.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8253' Jul 15 13:13:19.562: INFO: stderr: "" Jul 15 13:13:19.562: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 15 13:13:19.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8253' Jul 15 13:13:19.664: INFO: stderr: "" Jul 15 13:13:19.664: INFO: stdout: "update-demo-nautilus-74pw9 update-demo-nautilus-9t7dv " Jul 15 13:13:19.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-74pw9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8253' Jul 15 13:13:19.759: INFO: stderr: "" Jul 15 13:13:19.759: INFO: stdout: "true" Jul 15 13:13:19.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-74pw9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8253' Jul 15 13:13:19.850: INFO: stderr: "" Jul 15 13:13:19.850: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 15 13:13:19.850: INFO: validating pod update-demo-nautilus-74pw9 Jul 15 13:13:19.852: INFO: got data: { "image": "nautilus.jpg" } Jul 15 13:13:19.852: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 15 13:13:19.852: INFO: update-demo-nautilus-74pw9 is verified up and running Jul 15 13:13:19.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9t7dv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8253' Jul 15 13:13:20.033: INFO: stderr: "" Jul 15 13:13:20.033: INFO: stdout: "" Jul 15 13:13:20.033: INFO: update-demo-nautilus-9t7dv is created but not running Jul 15 13:13:25.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8253' Jul 15 13:13:25.128: INFO: stderr: "" Jul 15 13:13:25.128: INFO: stdout: "update-demo-nautilus-74pw9 update-demo-nautilus-9t7dv " Jul 15 13:13:25.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-74pw9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8253' Jul 15 13:13:25.225: INFO: stderr: "" Jul 15 13:13:25.225: INFO: stdout: "true" Jul 15 13:13:25.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-74pw9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8253' Jul 15 13:13:25.311: INFO: stderr: "" Jul 15 13:13:25.311: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 15 13:13:25.311: INFO: validating pod update-demo-nautilus-74pw9 Jul 15 13:13:25.315: INFO: got data: { "image": "nautilus.jpg" } Jul 15 13:13:25.315: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 15 13:13:25.315: INFO: update-demo-nautilus-74pw9 is verified up and running Jul 15 13:13:25.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9t7dv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8253' Jul 15 13:13:25.421: INFO: stderr: "" Jul 15 13:13:25.421: INFO: stdout: "true" Jul 15 13:13:25.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9t7dv -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8253' Jul 15 13:13:25.511: INFO: stderr: "" Jul 15 13:13:25.511: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 15 13:13:25.511: INFO: validating pod update-demo-nautilus-9t7dv Jul 15 13:13:25.515: INFO: got data: { "image": "nautilus.jpg" } Jul 15 13:13:25.515: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 15 13:13:25.515: INFO: update-demo-nautilus-9t7dv is verified up and running STEP: using delete to clean up resources Jul 15 13:13:25.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8253' Jul 15 13:13:25.609: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 15 13:13:25.609: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 15 13:13:25.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8253' Jul 15 13:13:25.710: INFO: stderr: "No resources found.\n" Jul 15 13:13:25.710: INFO: stdout: "" Jul 15 13:13:25.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8253 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 15 13:13:25.815: INFO: stderr: "" Jul 15 13:13:25.815: INFO: stdout: "update-demo-nautilus-74pw9\nupdate-demo-nautilus-9t7dv\n" Jul 15 13:13:26.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8253' Jul 15 13:13:26.414: INFO: stderr: "No resources found.\n" Jul 15 13:13:26.414: INFO: stdout: "" Jul 15 13:13:26.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8253 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 15 13:13:26.731: INFO: stderr: "" Jul 15 13:13:26.731: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:13:26.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8253" for this suite. 
Jul 15 13:13:48.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:13:48.873: INFO: namespace kubectl-8253 deletion completed in 22.138466368s • [SLOW TEST:50.972 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:13:48.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-8410 I0715 13:13:48.937217 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8410, replica count: 1 I0715 13:13:49.987751 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 13:13:50.987953 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 13:13:51.988182 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 13:13:52.988369 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 15 13:13:53.122: INFO: Created: latency-svc-p7pth Jul 15 13:13:53.136: INFO: Got endpoints: latency-svc-p7pth [48.406993ms] Jul 15 13:13:53.164: INFO: Created: latency-svc-knpxs Jul 15 13:13:53.247: INFO: Got endpoints: latency-svc-knpxs [110.783427ms] Jul 15 13:13:53.252: INFO: Created: latency-svc-snb5x Jul 15 13:13:53.275: INFO: Got endpoints: latency-svc-snb5x [138.151994ms] Jul 15 13:13:53.317: INFO: Created: latency-svc-9jj9w Jul 15 13:13:53.329: INFO: Got endpoints: latency-svc-9jj9w [192.561288ms] Jul 15 13:13:53.392: INFO: Created: latency-svc-w87h7 Jul 15 13:13:53.395: INFO: Got endpoints: latency-svc-w87h7 [258.613103ms] Jul 15 13:13:53.422: INFO: Created: latency-svc-2gncv Jul 15 13:13:53.444: INFO: Got endpoints: latency-svc-2gncv [306.703639ms] Jul 15 13:13:53.479: INFO: Created: latency-svc-xmg47 Jul 15 13:13:53.523: INFO: Got endpoints: latency-svc-xmg47 [385.878488ms] Jul 15 13:13:53.540: INFO: Created: latency-svc-5mg7b Jul 15 13:13:53.552: INFO: Got endpoints: latency-svc-5mg7b [415.231954ms] Jul 15 13:13:53.569: INFO: Created: latency-svc-r92hw Jul 15 13:13:53.579: INFO: Got endpoints: latency-svc-r92hw [442.538781ms] Jul 15 13:13:53.614: INFO: 
Created: latency-svc-cfdj4 Jul 15 13:13:53.661: INFO: Got endpoints: latency-svc-cfdj4 [524.553687ms] Jul 15 13:13:53.668: INFO: Created: latency-svc-22kbw Jul 15 13:13:53.682: INFO: Got endpoints: latency-svc-22kbw [545.12603ms] Jul 15 13:13:53.704: INFO: Created: latency-svc-x75mr Jul 15 13:13:53.718: INFO: Got endpoints: latency-svc-x75mr [581.367925ms] Jul 15 13:13:53.740: INFO: Created: latency-svc-p4449 Jul 15 13:13:53.754: INFO: Got endpoints: latency-svc-p4449 [617.661748ms] Jul 15 13:13:53.793: INFO: Created: latency-svc-f694f Jul 15 13:13:53.796: INFO: Got endpoints: latency-svc-f694f [659.002626ms] Jul 15 13:13:53.842: INFO: Created: latency-svc-4wjn4 Jul 15 13:13:53.857: INFO: Got endpoints: latency-svc-4wjn4 [720.032263ms] Jul 15 13:13:53.884: INFO: Created: latency-svc-ghlpw Jul 15 13:13:53.930: INFO: Got endpoints: latency-svc-ghlpw [793.040017ms] Jul 15 13:13:53.938: INFO: Created: latency-svc-bgqmp Jul 15 13:13:53.966: INFO: Got endpoints: latency-svc-bgqmp [718.578536ms] Jul 15 13:13:53.989: INFO: Created: latency-svc-g5pgx Jul 15 13:13:54.005: INFO: Got endpoints: latency-svc-g5pgx [729.765427ms] Jul 15 13:13:54.025: INFO: Created: latency-svc-68b84 Jul 15 13:13:54.085: INFO: Got endpoints: latency-svc-68b84 [755.856003ms] Jul 15 13:13:54.088: INFO: Created: latency-svc-scxmt Jul 15 13:13:54.095: INFO: Got endpoints: latency-svc-scxmt [699.757348ms] Jul 15 13:13:54.119: INFO: Created: latency-svc-7pk8c Jul 15 13:13:54.134: INFO: Got endpoints: latency-svc-7pk8c [690.759901ms] Jul 15 13:13:54.163: INFO: Created: latency-svc-c9r4r Jul 15 13:13:54.180: INFO: Got endpoints: latency-svc-c9r4r [657.109244ms] Jul 15 13:13:54.230: INFO: Created: latency-svc-s9mt4 Jul 15 13:13:54.240: INFO: Got endpoints: latency-svc-s9mt4 [687.674369ms] Jul 15 13:13:54.268: INFO: Created: latency-svc-kl7mt Jul 15 13:13:54.282: INFO: Got endpoints: latency-svc-kl7mt [703.113365ms] Jul 15 13:13:54.304: INFO: Created: latency-svc-76vk6 Jul 15 13:13:54.322: INFO: Got endpoints: latency-svc-76vk6 [660.677501ms] Jul 15 13:13:54.380: INFO: Created: latency-svc-thzp9 Jul 15 13:13:54.415: INFO: Got endpoints: latency-svc-thzp9 [732.837849ms] Jul 15 13:13:54.450: INFO: Created: latency-svc-484x6 Jul 15 13:13:54.511: INFO: Got endpoints: latency-svc-484x6 [792.785554ms] Jul 15 13:13:54.526: INFO: Created: latency-svc-fghzt Jul 15 13:13:54.541: INFO: Got endpoints: latency-svc-fghzt [786.413646ms] Jul 15 13:13:54.562: INFO: Created: latency-svc-28g4n Jul 15 13:13:54.571: INFO: Got endpoints: latency-svc-28g4n [775.574401ms] Jul 15 13:13:54.592: INFO: Created: latency-svc-9vh4q Jul 15 13:13:54.602: INFO: Got endpoints: latency-svc-9vh4q [744.632746ms] Jul 15 13:13:54.649: INFO: Created: latency-svc-qlq7t Jul 15 13:13:54.662: INFO: Got endpoints: latency-svc-qlq7t [732.189254ms] Jul 15 13:13:54.705: INFO: Created: latency-svc-524pg Jul 15 13:13:54.736: INFO: Got endpoints: latency-svc-524pg [769.662306ms] Jul 15 13:13:54.793: INFO: Created: latency-svc-4cc58 Jul 15 13:13:54.801: INFO: Got endpoints: latency-svc-4cc58 [795.963644ms] Jul 15 13:13:54.828: INFO: Created: latency-svc-cmk62 Jul 15 13:13:54.843: INFO: Got endpoints: latency-svc-cmk62 [757.709493ms] Jul 15 13:13:54.973: INFO: Created: latency-svc-knvkx Jul 15 13:13:54.981: INFO: Got endpoints: latency-svc-knvkx [885.596827ms] Jul 15 13:13:55.033: INFO: Created: latency-svc-2vr7m Jul 15 13:13:55.122: INFO: Got endpoints: latency-svc-2vr7m [987.37881ms] Jul 15 13:13:55.144: INFO: Created: latency-svc-5lx9g Jul 15 13:13:55.161: INFO: Got endpoints: 
latency-svc-5lx9g [981.36247ms] Jul 15 13:13:55.186: INFO: Created: latency-svc-cspvj Jul 15 13:13:55.201: INFO: Got endpoints: latency-svc-cspvj [960.636692ms] Jul 15 13:13:55.283: INFO: Created: latency-svc-zlzlj Jul 15 13:13:55.286: INFO: Got endpoints: latency-svc-zlzlj [1.003733809s] Jul 15 13:13:55.320: INFO: Created: latency-svc-2g6gg Jul 15 13:13:55.334: INFO: Got endpoints: latency-svc-2g6gg [1.011648684s] Jul 15 13:13:55.364: INFO: Created: latency-svc-wqkcs Jul 15 13:13:55.375: INFO: Got endpoints: latency-svc-wqkcs [959.954317ms] Jul 15 13:13:55.426: INFO: Created: latency-svc-kjbz9 Jul 15 13:13:55.442: INFO: Got endpoints: latency-svc-kjbz9 [930.86192ms] Jul 15 13:13:55.482: INFO: Created: latency-svc-pqffk Jul 15 13:13:55.496: INFO: Got endpoints: latency-svc-pqffk [954.528197ms] Jul 15 13:13:55.571: INFO: Created: latency-svc-n9s67 Jul 15 13:13:55.574: INFO: Got endpoints: latency-svc-n9s67 [1.002673938s] Jul 15 13:13:55.606: INFO: Created: latency-svc-857jw Jul 15 13:13:55.622: INFO: Got endpoints: latency-svc-857jw [1.020636162s] Jul 15 13:13:55.648: INFO: Created: latency-svc-cq4d4 Jul 15 13:13:55.708: INFO: Got endpoints: latency-svc-cq4d4 [1.04598502s] Jul 15 13:13:55.732: INFO: Created: latency-svc-jq87v Jul 15 13:13:55.749: INFO: Got endpoints: latency-svc-jq87v [1.01313858s] Jul 15 13:13:55.785: INFO: Created: latency-svc-btwnz Jul 15 13:13:55.792: INFO: Got endpoints: latency-svc-btwnz [991.491985ms] Jul 15 13:13:55.848: INFO: Created: latency-svc-n5dlx Jul 15 13:13:55.850: INFO: Got endpoints: latency-svc-n5dlx [1.00680601s] Jul 15 13:13:55.894: INFO: Created: latency-svc-58w7x Jul 15 13:13:55.924: INFO: Got endpoints: latency-svc-58w7x [942.925046ms] Jul 15 13:13:55.942: INFO: Created: latency-svc-msflq Jul 15 13:13:55.972: INFO: Got endpoints: latency-svc-msflq [850.052525ms] Jul 15 13:13:55.984: INFO: Created: latency-svc-7df97 Jul 15 13:13:55.996: INFO: Got endpoints: latency-svc-7df97 [835.039498ms] Jul 15 13:13:56.016: INFO: Created: latency-svc-zx9sb Jul 15 13:13:56.041: INFO: Got endpoints: latency-svc-zx9sb [840.382649ms] Jul 15 13:13:56.064: INFO: Created: latency-svc-gbrrw Jul 15 13:13:56.098: INFO: Got endpoints: latency-svc-gbrrw [811.425352ms] Jul 15 13:13:56.115: INFO: Created: latency-svc-cvmv7 Jul 15 13:13:56.129: INFO: Got endpoints: latency-svc-cvmv7 [795.701592ms] Jul 15 13:13:56.170: INFO: Created: latency-svc-8dtw8 Jul 15 13:13:56.242: INFO: Got endpoints: latency-svc-8dtw8 [866.726571ms] Jul 15 13:13:56.257: INFO: Created: latency-svc-7gbxp Jul 15 13:13:56.274: INFO: Got endpoints: latency-svc-7gbxp [832.050591ms] Jul 15 13:13:56.298: INFO: Created: latency-svc-ctwc5 Jul 15 13:13:56.309: INFO: Got endpoints: latency-svc-ctwc5 [813.614376ms] Jul 15 13:13:56.387: INFO: Created: latency-svc-2rslh Jul 15 13:13:56.389: INFO: Got endpoints: latency-svc-2rslh [814.83316ms] Jul 15 13:13:56.416: INFO: Created: latency-svc-wcqjj Jul 15 13:13:56.430: INFO: Got endpoints: latency-svc-wcqjj [807.430547ms] Jul 15 13:13:56.452: INFO: Created: latency-svc-4944g Jul 15 13:13:56.466: INFO: Got endpoints: latency-svc-4944g [757.689496ms] Jul 15 13:13:56.524: INFO: Created: latency-svc-g4hlf Jul 15 13:13:56.538: INFO: Got endpoints: latency-svc-g4hlf [788.810877ms] Jul 15 13:13:56.569: INFO: Created: latency-svc-nflhn Jul 15 13:13:56.581: INFO: Got endpoints: latency-svc-nflhn [788.471536ms] Jul 15 13:13:56.602: INFO: Created: latency-svc-l24z7 Jul 15 13:13:56.617: INFO: Got endpoints: latency-svc-l24z7 [767.286123ms] Jul 15 13:13:56.661: INFO: Created: 
latency-svc-r5psw Jul 15 13:13:56.663: INFO: Got endpoints: latency-svc-r5psw [739.410903ms] Jul 15 13:13:56.698: INFO: Created: latency-svc-tl2m9 Jul 15 13:13:56.713: INFO: Got endpoints: latency-svc-tl2m9 [741.151199ms] Jul 15 13:13:56.736: INFO: Created: latency-svc-pqffz Jul 15 13:13:56.760: INFO: Got endpoints: latency-svc-pqffz [763.238902ms] Jul 15 13:13:56.811: INFO: Created: latency-svc-zmbj9 Jul 15 13:13:56.815: INFO: Got endpoints: latency-svc-zmbj9 [773.689137ms] Jul 15 13:13:56.841: INFO: Created: latency-svc-n2xr8 Jul 15 13:13:56.853: INFO: Got endpoints: latency-svc-n2xr8 [754.990842ms] Jul 15 13:13:56.878: INFO: Created: latency-svc-l8xhh Jul 15 13:13:56.889: INFO: Got endpoints: latency-svc-l8xhh [759.393125ms] Jul 15 13:13:56.966: INFO: Created: latency-svc-42s8n Jul 15 13:13:56.969: INFO: Got endpoints: latency-svc-42s8n [727.167849ms] Jul 15 13:13:57.110: INFO: Created: latency-svc-xrncf Jul 15 13:13:57.115: INFO: Got endpoints: latency-svc-xrncf [840.81099ms] Jul 15 13:13:57.148: INFO: Created: latency-svc-57c9h Jul 15 13:13:57.159: INFO: Got endpoints: latency-svc-57c9h [849.787533ms] Jul 15 13:13:57.253: INFO: Created: latency-svc-cx5cq Jul 15 13:13:57.256: INFO: Got endpoints: latency-svc-cx5cq [866.850533ms] Jul 15 13:13:57.288: INFO: Created: latency-svc-k8s85 Jul 15 13:13:57.298: INFO: Got endpoints: latency-svc-k8s85 [868.119041ms] Jul 15 13:13:57.321: INFO: Created: latency-svc-875rp Jul 15 13:13:57.334: INFO: Got endpoints: latency-svc-875rp [867.787398ms] Jul 15 13:13:57.351: INFO: Created: latency-svc-4p8sw Jul 15 13:13:57.415: INFO: Got endpoints: latency-svc-4p8sw [877.185617ms] Jul 15 13:13:57.438: INFO: Created: latency-svc-hrqmb Jul 15 13:13:57.454: INFO: Got endpoints: latency-svc-hrqmb [873.771688ms] Jul 15 13:13:57.480: INFO: Created: latency-svc-bmw69 Jul 15 13:13:57.492: INFO: Got endpoints: latency-svc-bmw69 [874.294538ms] Jul 15 13:13:57.513: INFO: Created: latency-svc-dfrch Jul 15 13:13:57.558: INFO: Got endpoints: latency-svc-dfrch [894.8708ms] Jul 15 13:13:57.561: INFO: Created: latency-svc-j59m6 Jul 15 13:13:57.588: INFO: Got endpoints: latency-svc-j59m6 [874.39079ms] Jul 15 13:13:57.618: INFO: Created: latency-svc-62429 Jul 15 13:13:57.648: INFO: Got endpoints: latency-svc-62429 [888.070527ms] Jul 15 13:13:57.721: INFO: Created: latency-svc-6kvmf Jul 15 13:13:57.738: INFO: Got endpoints: latency-svc-6kvmf [922.990643ms] Jul 15 13:13:57.801: INFO: Created: latency-svc-2lq69 Jul 15 13:13:57.900: INFO: Got endpoints: latency-svc-2lq69 [1.047313493s] Jul 15 13:13:57.921: INFO: Created: latency-svc-mvln2 Jul 15 13:13:57.973: INFO: Got endpoints: latency-svc-mvln2 [1.083658408s] Jul 15 13:13:58.081: INFO: Created: latency-svc-qlknx Jul 15 13:13:58.084: INFO: Got endpoints: latency-svc-qlknx [1.114811943s] Jul 15 13:13:58.179: INFO: Created: latency-svc-qncsw Jul 15 13:13:58.277: INFO: Got endpoints: latency-svc-qncsw [1.162697219s] Jul 15 13:13:58.306: INFO: Created: latency-svc-dvh7v Jul 15 13:13:58.342: INFO: Got endpoints: latency-svc-dvh7v [1.183108438s] Jul 15 13:13:58.439: INFO: Created: latency-svc-z9pwq Jul 15 13:13:58.475: INFO: Got endpoints: latency-svc-z9pwq [1.218728163s] Jul 15 13:13:58.763: INFO: Created: latency-svc-lt6ql Jul 15 13:13:58.786: INFO: Got endpoints: latency-svc-lt6ql [1.488394294s] Jul 15 13:13:58.855: INFO: Created: latency-svc-hcnn6 Jul 15 13:13:58.919: INFO: Got endpoints: latency-svc-hcnn6 [1.584955339s] Jul 15 13:13:58.978: INFO: Created: latency-svc-7qr7p Jul 15 13:13:59.080: INFO: Got endpoints: 
latency-svc-7qr7p [1.664545808s] Jul 15 13:13:59.094: INFO: Created: latency-svc-9zs6s Jul 15 13:13:59.123: INFO: Got endpoints: latency-svc-9zs6s [1.668699535s] Jul 15 13:13:59.266: INFO: Created: latency-svc-vw6lg Jul 15 13:13:59.355: INFO: Got endpoints: latency-svc-vw6lg [275.275739ms] Jul 15 13:13:59.445: INFO: Created: latency-svc-xsr9h Jul 15 13:13:59.483: INFO: Got endpoints: latency-svc-xsr9h [1.991027309s] Jul 15 13:13:59.538: INFO: Created: latency-svc-xczk5 Jul 15 13:13:59.542: INFO: Got endpoints: latency-svc-xczk5 [1.984136158s] Jul 15 13:13:59.595: INFO: Created: latency-svc-n5jpt Jul 15 13:13:59.603: INFO: Got endpoints: latency-svc-n5jpt [2.015027344s] Jul 15 13:13:59.622: INFO: Created: latency-svc-4qwf8 Jul 15 13:13:59.651: INFO: Got endpoints: latency-svc-4qwf8 [2.002744022s] Jul 15 13:13:59.688: INFO: Created: latency-svc-d9njt Jul 15 13:13:59.750: INFO: Got endpoints: latency-svc-d9njt [2.012559032s] Jul 15 13:13:59.752: INFO: Created: latency-svc-xjgmk Jul 15 13:13:59.781: INFO: Got endpoints: latency-svc-xjgmk [1.881352481s] Jul 15 13:13:59.838: INFO: Created: latency-svc-bdp6p Jul 15 13:13:59.906: INFO: Got endpoints: latency-svc-bdp6p [1.933394793s] Jul 15 13:13:59.908: INFO: Created: latency-svc-g572h Jul 15 13:13:59.916: INFO: Got endpoints: latency-svc-g572h [1.831824499s] Jul 15 13:13:59.951: INFO: Created: latency-svc-xhlk2 Jul 15 13:13:59.958: INFO: Got endpoints: latency-svc-xhlk2 [1.680759768s] Jul 15 13:13:59.982: INFO: Created: latency-svc-ll558 Jul 15 13:13:59.995: INFO: Got endpoints: latency-svc-ll558 [1.652542193s] Jul 15 13:14:00.048: INFO: Created: latency-svc-8tsmm Jul 15 13:14:00.072: INFO: Got endpoints: latency-svc-8tsmm [1.596980835s] Jul 15 13:14:00.100: INFO: Created: latency-svc-m5cms Jul 15 13:14:00.115: INFO: Got endpoints: latency-svc-m5cms [1.328730851s] Jul 15 13:14:00.136: INFO: Created: latency-svc-ghcpn Jul 15 13:14:00.169: INFO: Got endpoints: latency-svc-ghcpn [1.250516514s] Jul 15 13:14:00.180: INFO: Created: latency-svc-xvh45 Jul 15 13:14:00.194: INFO: Got endpoints: latency-svc-xvh45 [1.070550621s] Jul 15 13:14:00.216: INFO: Created: latency-svc-whjwg Jul 15 13:14:00.230: INFO: Got endpoints: latency-svc-whjwg [875.018618ms] Jul 15 13:14:00.252: INFO: Created: latency-svc-wnwrk Jul 15 13:14:00.337: INFO: Got endpoints: latency-svc-wnwrk [854.095368ms] Jul 15 13:14:00.339: INFO: Created: latency-svc-m4pdk Jul 15 13:14:00.357: INFO: Got endpoints: latency-svc-m4pdk [814.093175ms] Jul 15 13:14:00.376: INFO: Created: latency-svc-nf8dp Jul 15 13:14:00.660: INFO: Got endpoints: latency-svc-nf8dp [1.057533661s] Jul 15 13:14:00.677: INFO: Created: latency-svc-tvwln Jul 15 13:14:00.693: INFO: Got endpoints: latency-svc-tvwln [1.042041619s] Jul 15 13:14:00.744: INFO: Created: latency-svc-9dbd7 Jul 15 13:14:00.753: INFO: Got endpoints: latency-svc-9dbd7 [1.002819957s] Jul 15 13:14:00.804: INFO: Created: latency-svc-2pc6j Jul 15 13:14:00.813: INFO: Got endpoints: latency-svc-2pc6j [1.031637071s] Jul 15 13:14:00.834: INFO: Created: latency-svc-9psw8 Jul 15 13:14:00.869: INFO: Got endpoints: latency-svc-9psw8 [963.271355ms] Jul 15 13:14:00.903: INFO: Created: latency-svc-gm64c Jul 15 13:14:00.978: INFO: Got endpoints: latency-svc-gm64c [1.062058314s] Jul 15 13:14:01.231: INFO: Created: latency-svc-mv7hg Jul 15 13:14:01.301: INFO: Got endpoints: latency-svc-mv7hg [1.343191263s] Jul 15 13:14:01.350: INFO: Created: latency-svc-m42vx Jul 15 13:14:01.445: INFO: Got endpoints: latency-svc-m42vx [1.450037499s] Jul 15 13:14:01.486: INFO: Created: 
latency-svc-qbhk2 Jul 15 13:14:01.517: INFO: Got endpoints: latency-svc-qbhk2 [1.444952436s] Jul 15 13:14:01.540: INFO: Created: latency-svc-zpds2 Jul 15 13:14:01.643: INFO: Got endpoints: latency-svc-zpds2 [1.527393828s] Jul 15 13:14:01.645: INFO: Created: latency-svc-fxpvd Jul 15 13:14:01.654: INFO: Got endpoints: latency-svc-fxpvd [1.484934961s] Jul 15 13:14:01.691: INFO: Created: latency-svc-thxd6 Jul 15 13:14:01.703: INFO: Got endpoints: latency-svc-thxd6 [1.508622326s] Jul 15 13:14:01.734: INFO: Created: latency-svc-jm29q Jul 15 13:14:01.768: INFO: Got endpoints: latency-svc-jm29q [1.538044772s] Jul 15 13:14:01.794: INFO: Created: latency-svc-nbws9 Jul 15 13:14:01.811: INFO: Got endpoints: latency-svc-nbws9 [1.474331637s] Jul 15 13:14:01.830: INFO: Created: latency-svc-l4v8w Jul 15 13:14:01.866: INFO: Got endpoints: latency-svc-l4v8w [1.50910867s] Jul 15 13:14:01.918: INFO: Created: latency-svc-ndzzl Jul 15 13:14:01.998: INFO: Got endpoints: latency-svc-ndzzl [1.337660132s] Jul 15 13:14:02.110: INFO: Created: latency-svc-hbjcw Jul 15 13:14:02.114: INFO: Got endpoints: latency-svc-hbjcw [1.420801531s] Jul 15 13:14:02.142: INFO: Created: latency-svc-jvmxq Jul 15 13:14:02.151: INFO: Got endpoints: latency-svc-jvmxq [1.397987005s] Jul 15 13:14:02.170: INFO: Created: latency-svc-jcfm8 Jul 15 13:14:02.181: INFO: Got endpoints: latency-svc-jcfm8 [1.368227002s] Jul 15 13:14:02.199: INFO: Created: latency-svc-d5vxc Jul 15 13:14:02.241: INFO: Got endpoints: latency-svc-d5vxc [1.37190538s] Jul 15 13:14:02.247: INFO: Created: latency-svc-ppp9k Jul 15 13:14:02.260: INFO: Got endpoints: latency-svc-ppp9k [1.281852143s] Jul 15 13:14:02.277: INFO: Created: latency-svc-vrdlb Jul 15 13:14:02.304: INFO: Got endpoints: latency-svc-vrdlb [1.00254107s] Jul 15 13:14:02.328: INFO: Created: latency-svc-5kpq9 Jul 15 13:14:02.339: INFO: Got endpoints: latency-svc-5kpq9 [893.668411ms] Jul 15 13:14:02.385: INFO: Created: latency-svc-sxgj8 Jul 15 13:14:02.387: INFO: Got endpoints: latency-svc-sxgj8 [870.577132ms] Jul 15 13:14:02.417: INFO: Created: latency-svc-5h2d8 Jul 15 13:14:02.429: INFO: Got endpoints: latency-svc-5h2d8 [786.741597ms] Jul 15 13:14:02.452: INFO: Created: latency-svc-g49pb Jul 15 13:14:02.466: INFO: Got endpoints: latency-svc-g49pb [811.15245ms] Jul 15 13:14:02.524: INFO: Created: latency-svc-m2t27 Jul 15 13:14:02.544: INFO: Got endpoints: latency-svc-m2t27 [841.786712ms] Jul 15 13:14:02.574: INFO: Created: latency-svc-9px2b Jul 15 13:14:02.593: INFO: Got endpoints: latency-svc-9px2b [824.326039ms] Jul 15 13:14:02.614: INFO: Created: latency-svc-ttkrr Jul 15 13:14:02.666: INFO: Got endpoints: latency-svc-ttkrr [854.856971ms] Jul 15 13:14:02.668: INFO: Created: latency-svc-p5s7f Jul 15 13:14:02.689: INFO: Got endpoints: latency-svc-p5s7f [823.246898ms] Jul 15 13:14:02.710: INFO: Created: latency-svc-wlt8p Jul 15 13:14:02.737: INFO: Got endpoints: latency-svc-wlt8p [739.238247ms] Jul 15 13:14:02.754: INFO: Created: latency-svc-wskbc Jul 15 13:14:02.828: INFO: Got endpoints: latency-svc-wskbc [714.285731ms] Jul 15 13:14:02.841: INFO: Created: latency-svc-bs7d5 Jul 15 13:14:02.858: INFO: Got endpoints: latency-svc-bs7d5 [706.346451ms] Jul 15 13:14:02.879: INFO: Created: latency-svc-bj28h Jul 15 13:14:02.888: INFO: Got endpoints: latency-svc-bj28h [706.564295ms] Jul 15 13:14:02.974: INFO: Created: latency-svc-78wbn Jul 15 13:14:02.982: INFO: Got endpoints: latency-svc-78wbn [740.310821ms] Jul 15 13:14:03.023: INFO: Created: latency-svc-fc8r2 Jul 15 13:14:03.045: INFO: Got endpoints: 
latency-svc-fc8r2 [785.049822ms] Jul 15 13:14:03.070: INFO: Created: latency-svc-wxdgf Jul 15 13:14:03.127: INFO: Got endpoints: latency-svc-wxdgf [823.351047ms] Jul 15 13:14:03.129: INFO: Created: latency-svc-m8ctg Jul 15 13:14:03.135: INFO: Got endpoints: latency-svc-m8ctg [796.143362ms] Jul 15 13:14:03.156: INFO: Created: latency-svc-9t7cz Jul 15 13:14:03.166: INFO: Got endpoints: latency-svc-9t7cz [778.193758ms] Jul 15 13:14:03.187: INFO: Created: latency-svc-h2bdk Jul 15 13:14:03.196: INFO: Got endpoints: latency-svc-h2bdk [766.595663ms] Jul 15 13:14:03.216: INFO: Created: latency-svc-6plck Jul 15 13:14:03.227: INFO: Got endpoints: latency-svc-6plck [761.050671ms] Jul 15 13:14:03.272: INFO: Created: latency-svc-pv7wf Jul 15 13:14:03.279: INFO: Got endpoints: latency-svc-pv7wf [734.691186ms] Jul 15 13:14:03.304: INFO: Created: latency-svc-fb52f Jul 15 13:14:03.317: INFO: Got endpoints: latency-svc-fb52f [724.150664ms] Jul 15 13:14:03.348: INFO: Created: latency-svc-5x5cf Jul 15 13:14:03.371: INFO: Got endpoints: latency-svc-5x5cf [704.670643ms] Jul 15 13:14:03.433: INFO: Created: latency-svc-ckg6s Jul 15 13:14:03.443: INFO: Got endpoints: latency-svc-ckg6s [753.992954ms] Jul 15 13:14:03.495: INFO: Created: latency-svc-4dsmq Jul 15 13:14:03.504: INFO: Got endpoints: latency-svc-4dsmq [766.183743ms] Jul 15 13:14:03.559: INFO: Created: latency-svc-xfksg Jul 15 13:14:03.600: INFO: Got endpoints: latency-svc-xfksg [771.779894ms] Jul 15 13:14:03.625: INFO: Created: latency-svc-k84hp Jul 15 13:14:03.636: INFO: Got endpoints: latency-svc-k84hp [778.376027ms] Jul 15 13:14:03.657: INFO: Created: latency-svc-g9skg Jul 15 13:14:03.727: INFO: Got endpoints: latency-svc-g9skg [838.792978ms] Jul 15 13:14:03.774: INFO: Created: latency-svc-b472g Jul 15 13:14:03.793: INFO: Got endpoints: latency-svc-b472g [810.955355ms] Jul 15 13:14:03.810: INFO: Created: latency-svc-6755c Jul 15 13:14:03.876: INFO: Got endpoints: latency-svc-6755c [830.945523ms] Jul 15 13:14:03.903: INFO: Created: latency-svc-mhzqj Jul 15 13:14:03.925: INFO: Got endpoints: latency-svc-mhzqj [797.619311ms] Jul 15 13:14:03.960: INFO: Created: latency-svc-96vkg Jul 15 13:14:03.974: INFO: Got endpoints: latency-svc-96vkg [838.88926ms] Jul 15 13:14:04.026: INFO: Created: latency-svc-wvczj Jul 15 13:14:04.029: INFO: Got endpoints: latency-svc-wvczj [863.494704ms] Jul 15 13:14:04.056: INFO: Created: latency-svc-m46dc Jul 15 13:14:04.070: INFO: Got endpoints: latency-svc-m46dc [874.002969ms] Jul 15 13:14:04.090: INFO: Created: latency-svc-rbqgh Jul 15 13:14:04.100: INFO: Got endpoints: latency-svc-rbqgh [873.557926ms] Jul 15 13:14:04.120: INFO: Created: latency-svc-mdm47 Jul 15 13:14:04.163: INFO: Got endpoints: latency-svc-mdm47 [884.18148ms] Jul 15 13:14:04.168: INFO: Created: latency-svc-g79w5 Jul 15 13:14:04.191: INFO: Got endpoints: latency-svc-g79w5 [874.647582ms] Jul 15 13:14:04.212: INFO: Created: latency-svc-htdb8 Jul 15 13:14:04.221: INFO: Got endpoints: latency-svc-htdb8 [850.084378ms] Jul 15 13:14:04.242: INFO: Created: latency-svc-5wqmf Jul 15 13:14:04.251: INFO: Got endpoints: latency-svc-5wqmf [808.131235ms] Jul 15 13:14:04.295: INFO: Created: latency-svc-ktrdc Jul 15 13:14:04.298: INFO: Got endpoints: latency-svc-ktrdc [794.875842ms] Jul 15 13:14:04.317: INFO: Created: latency-svc-65jb7 Jul 15 13:14:04.330: INFO: Got endpoints: latency-svc-65jb7 [730.270452ms] Jul 15 13:14:04.348: INFO: Created: latency-svc-bwss4 Jul 15 13:14:04.360: INFO: Got endpoints: latency-svc-bwss4 [724.047923ms] Jul 15 13:14:04.380: INFO: Created: 
latency-svc-j4rc4 Jul 15 13:14:04.463: INFO: Got endpoints: latency-svc-j4rc4 [735.717175ms] Jul 15 13:14:04.465: INFO: Created: latency-svc-485q2 Jul 15 13:14:04.515: INFO: Got endpoints: latency-svc-485q2 [722.494598ms] Jul 15 13:14:04.619: INFO: Created: latency-svc-2ph2k Jul 15 13:14:04.622: INFO: Got endpoints: latency-svc-2ph2k [745.967522ms] Jul 15 13:14:04.798: INFO: Created: latency-svc-8lgwg Jul 15 13:14:04.801: INFO: Got endpoints: latency-svc-8lgwg [876.04868ms] Jul 15 13:14:04.873: INFO: Created: latency-svc-tk6pc Jul 15 13:14:04.978: INFO: Got endpoints: latency-svc-tk6pc [1.003992748s] Jul 15 13:14:05.001: INFO: Created: latency-svc-vtf5n Jul 15 13:14:05.052: INFO: Got endpoints: latency-svc-vtf5n [1.022490346s] Jul 15 13:14:05.202: INFO: Created: latency-svc-99w89 Jul 15 13:14:05.226: INFO: Got endpoints: latency-svc-99w89 [1.15566263s] Jul 15 13:14:05.314: INFO: Created: latency-svc-zrbws Jul 15 13:14:05.328: INFO: Got endpoints: latency-svc-zrbws [1.227727059s] Jul 15 13:14:05.571: INFO: Created: latency-svc-kf49l Jul 15 13:14:05.598: INFO: Got endpoints: latency-svc-kf49l [1.434510541s] Jul 15 13:14:05.805: INFO: Created: latency-svc-97vln Jul 15 13:14:05.972: INFO: Got endpoints: latency-svc-97vln [1.780712878s] Jul 15 13:14:05.994: INFO: Created: latency-svc-sgkd4 Jul 15 13:14:06.025: INFO: Got endpoints: latency-svc-sgkd4 [1.803468895s] Jul 15 13:14:06.194: INFO: Created: latency-svc-z9j69 Jul 15 13:14:06.198: INFO: Got endpoints: latency-svc-z9j69 [1.946999052s] Jul 15 13:14:06.276: INFO: Created: latency-svc-hb674 Jul 15 13:14:06.337: INFO: Got endpoints: latency-svc-hb674 [2.038676477s] Jul 15 13:14:06.366: INFO: Created: latency-svc-bdddv Jul 15 13:14:06.372: INFO: Got endpoints: latency-svc-bdddv [2.041786838s] Jul 15 13:14:06.394: INFO: Created: latency-svc-rnsbv Jul 15 13:14:06.409: INFO: Got endpoints: latency-svc-rnsbv [2.048075756s] Jul 15 13:14:06.432: INFO: Created: latency-svc-swwzz Jul 15 13:14:06.463: INFO: Got endpoints: latency-svc-swwzz [2.000123482s] Jul 15 13:14:06.480: INFO: Created: latency-svc-6nx8z Jul 15 13:14:06.493: INFO: Got endpoints: latency-svc-6nx8z [1.977778632s] Jul 15 13:14:06.538: INFO: Created: latency-svc-ccdxj Jul 15 13:14:06.560: INFO: Got endpoints: latency-svc-ccdxj [1.938194378s] Jul 15 13:14:06.622: INFO: Created: latency-svc-dt68w Jul 15 13:14:06.638: INFO: Got endpoints: latency-svc-dt68w [1.836567863s] Jul 15 13:14:06.658: INFO: Created: latency-svc-ptrhj Jul 15 13:14:06.668: INFO: Got endpoints: latency-svc-ptrhj [1.689861875s] Jul 15 13:14:06.688: INFO: Created: latency-svc-kbhrx Jul 15 13:14:06.705: INFO: Got endpoints: latency-svc-kbhrx [1.653196158s] Jul 15 13:14:06.756: INFO: Created: latency-svc-kr58b Jul 15 13:14:06.765: INFO: Got endpoints: latency-svc-kr58b [1.538751735s] Jul 15 13:14:06.786: INFO: Created: latency-svc-w9rtw Jul 15 13:14:06.801: INFO: Got endpoints: latency-svc-w9rtw [1.473064789s] Jul 15 13:14:06.822: INFO: Created: latency-svc-k7cch Jul 15 13:14:06.837: INFO: Got endpoints: latency-svc-k7cch [1.239049555s] Jul 15 13:14:06.856: INFO: Created: latency-svc-dgsrb Jul 15 13:14:06.894: INFO: Got endpoints: latency-svc-dgsrb [921.332173ms] Jul 15 13:14:06.897: INFO: Created: latency-svc-6gmv7 Jul 15 13:14:06.916: INFO: Got endpoints: latency-svc-6gmv7 [890.899564ms] Jul 15 13:14:06.934: INFO: Created: latency-svc-l6vpn Jul 15 13:14:06.952: INFO: Got endpoints: latency-svc-l6vpn [753.86021ms] Jul 15 13:14:06.952: INFO: Latencies: [110.783427ms 138.151994ms 192.561288ms 258.613103ms 275.275739ms 
306.703639ms 385.878488ms 415.231954ms 442.538781ms 524.553687ms 545.12603ms 581.367925ms 617.661748ms 657.109244ms 659.002626ms 660.677501ms 687.674369ms 690.759901ms 699.757348ms 703.113365ms 704.670643ms 706.346451ms 706.564295ms 714.285731ms 718.578536ms 720.032263ms 722.494598ms 724.047923ms 724.150664ms 727.167849ms 729.765427ms 730.270452ms 732.189254ms 732.837849ms 734.691186ms 735.717175ms 739.238247ms 739.410903ms 740.310821ms 741.151199ms 744.632746ms 745.967522ms 753.86021ms 753.992954ms 754.990842ms 755.856003ms 757.689496ms 757.709493ms 759.393125ms 761.050671ms 763.238902ms 766.183743ms 766.595663ms 767.286123ms 769.662306ms 771.779894ms 773.689137ms 775.574401ms 778.193758ms 778.376027ms 785.049822ms 786.413646ms 786.741597ms 788.471536ms 788.810877ms 792.785554ms 793.040017ms 794.875842ms 795.701592ms 795.963644ms 796.143362ms 797.619311ms 807.430547ms 808.131235ms 810.955355ms 811.15245ms 811.425352ms 813.614376ms 814.093175ms 814.83316ms 823.246898ms 823.351047ms 824.326039ms 830.945523ms 832.050591ms 835.039498ms 838.792978ms 838.88926ms 840.382649ms 840.81099ms 841.786712ms 849.787533ms 850.052525ms 850.084378ms 854.095368ms 854.856971ms 863.494704ms 866.726571ms 866.850533ms 867.787398ms 868.119041ms 870.577132ms 873.557926ms 873.771688ms 874.002969ms 874.294538ms 874.39079ms 874.647582ms 875.018618ms 876.04868ms 877.185617ms 884.18148ms 885.596827ms 888.070527ms 890.899564ms 893.668411ms 894.8708ms 921.332173ms 922.990643ms 930.86192ms 942.925046ms 954.528197ms 959.954317ms 960.636692ms 963.271355ms 981.36247ms 987.37881ms 991.491985ms 1.00254107s 1.002673938s 1.002819957s 1.003733809s 1.003992748s 1.00680601s 1.011648684s 1.01313858s 1.020636162s 1.022490346s 1.031637071s 1.042041619s 1.04598502s 1.047313493s 1.057533661s 1.062058314s 1.070550621s 1.083658408s 1.114811943s 1.15566263s 1.162697219s 1.183108438s 1.218728163s 1.227727059s 1.239049555s 1.250516514s 1.281852143s 1.328730851s 1.337660132s 1.343191263s 1.368227002s 1.37190538s 1.397987005s 1.420801531s 1.434510541s 1.444952436s 1.450037499s 1.473064789s 1.474331637s 1.484934961s 1.488394294s 1.508622326s 1.50910867s 1.527393828s 1.538044772s 1.538751735s 1.584955339s 1.596980835s 1.652542193s 1.653196158s 1.664545808s 1.668699535s 1.680759768s 1.689861875s 1.780712878s 1.803468895s 1.831824499s 1.836567863s 1.881352481s 1.933394793s 1.938194378s 1.946999052s 1.977778632s 1.984136158s 1.991027309s 2.000123482s 2.002744022s 2.012559032s 2.015027344s 2.038676477s 2.041786838s 2.048075756s] Jul 15 13:14:06.953: INFO: 50 %ile: 868.119041ms Jul 15 13:14:06.953: INFO: 90 %ile: 1.680759768s Jul 15 13:14:06.953: INFO: 99 %ile: 2.041786838s Jul 15 13:14:06.953: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:14:06.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8410" for this suite. 
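Note: each Created/Got endpoints pair above times how long a freshly created service takes to have observable endpoints, and the verdict rests on the reported percentiles (p50 868ms, p90 1.68s, p99 2.04s over 200 samples) staying under the suite's built-in thresholds. A rough manual analogue of a single sample, using hypothetical names and a plain polling loop in place of the framework's endpoints watcher:

  # expose the test RC as a new service, then poll until its endpoints are populated
  kubectl expose rc svc-latency-rc --name=latency-probe --port=80 --namespace=svc-latency-8410
  until kubectl get endpoints latency-probe --namespace=svc-latency-8410 \
      -o template --template='{{range .subsets}}ready{{end}}' | grep -q ready; do
    sleep 0.2
  done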
Jul 15 13:14:31.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:14:31.140: INFO: namespace svc-latency-8410 deletion completed in 24.095353387s • [SLOW TEST:42.265 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:14:31.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 15 13:14:31.227: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58fde3b6-d600-4e79-9d31-463f3c9ef82d" in namespace "projected-9350" to be "success or failure" Jul 15 13:14:31.254: INFO: Pod "downwardapi-volume-58fde3b6-d600-4e79-9d31-463f3c9ef82d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.486552ms Jul 15 13:14:33.258: INFO: Pod "downwardapi-volume-58fde3b6-d600-4e79-9d31-463f3c9ef82d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030451442s Jul 15 13:14:35.262: INFO: Pod "downwardapi-volume-58fde3b6-d600-4e79-9d31-463f3c9ef82d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03453391s STEP: Saw pod success Jul 15 13:14:35.262: INFO: Pod "downwardapi-volume-58fde3b6-d600-4e79-9d31-463f3c9ef82d" satisfied condition "success or failure" Jul 15 13:14:35.264: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-58fde3b6-d600-4e79-9d31-463f3c9ef82d container client-container: STEP: delete the pod Jul 15 13:14:35.299: INFO: Waiting for pod downwardapi-volume-58fde3b6-d600-4e79-9d31-463f3c9ef82d to disappear Jul 15 13:14:35.309: INFO: Pod downwardapi-volume-58fde3b6-d600-4e79-9d31-463f3c9ef82d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:14:35.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9350" for this suite. 
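Note: the DefaultMode assertion above concerns the permission bits on files written by a projected downward API volume. A minimal sketch of that shape (pod name, image, and mount path are illustrative assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: defaultmode-demo                # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      # print the mode of the projected file, then exit
      command: ["/bin/sh", "-c", "ls -l /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        defaultMode: 0400                 # the mode asserted on each projected file
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name
  EOF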
Jul 15 13:14:41.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:14:41.385: INFO: namespace projected-9350 deletion completed in 6.072540765s • [SLOW TEST:10.244 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:14:41.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-b18fd4f2-cae4-4c20-8ae1-e8569c93cb5e STEP: Creating secret with name s-test-opt-upd-d89b727c-35ee-412d-b5f3-98481e8c0790 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b18fd4f2-cae4-4c20-8ae1-e8569c93cb5e STEP: Updating secret s-test-opt-upd-d89b727c-35ee-412d-b5f3-98481e8c0790 STEP: Creating secret with name s-test-opt-create-74f55fb5-fed2-4b6c-a3c5-60e1ae587bfb STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:14:49.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5930" for this suite. 
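Note: the optional-updates steps above delete one secret, update another, and create a third while the pod is running, then wait for the projected volume to converge. A sketch of a pod that tolerates those changes; the secret names echo the log's pattern, everything else is illustrative:

  kubectl create secret generic s-test-opt-del --from-literal=data-1=value-1
  kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: optional-secret-demo            # hypothetical name
  spec:
    containers:
    - name: watcher
      image: busybox
      command: ["/bin/sh", "-c", "while true; do ls /etc/creds; sleep 2; done"]
      volumeMounts:
      - name: creds
        mountPath: /etc/creds
    volumes:
    - name: creds
      projected:
        sources:
        - secret:
            name: s-test-opt-del
            optional: true                # pod stays healthy even after this secret is deleted
        - secret:
            name: s-test-opt-upd
            optional: true
  EOF
  kubectl delete secret s-test-opt-del
  # within a kubelet sync period, the files projected from the deleted secret disappear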
Jul 15 13:15:11.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:15:11.825: INFO: namespace projected-5930 deletion completed in 22.111259469s • [SLOW TEST:30.439 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:15:11.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jul 15 13:15:15.940: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jul 15 13:15:31.046: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:15:31.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2953" for this suite. 
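Note: the grace-period flow above submits a pod, deletes it gracefully, and confirms the kubelet observed the termination notice before the API object vanishes. The same sequence from the command line, with a hypothetical pod name:

  # graceful deletion returns immediately; the object lingers while it terminates
  kubectl delete pod graceful-demo --grace-period=30 --wait=false
  # deletionTimestamp is set at once; the pod disappears only after the kubelet
  # confirms the containers have stopped (or the grace period expires)
  kubectl get pod graceful-demo -o template --template='{{.metadata.deletionTimestamp}}'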
Jul 15 13:15:37.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:15:37.215: INFO: namespace pods-2953 deletion completed in 6.161277044s • [SLOW TEST:25.390 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:15:37.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-2c03f77b-929b-458c-8f6d-0b911ece9e63 STEP: Creating a pod to test consume configMaps Jul 15 13:15:37.289: INFO: Waiting up to 5m0s for pod "pod-configmaps-16681a42-bc0a-4187-bb84-ae00d7d26daf" in namespace "configmap-1986" to be "success or failure" Jul 15 13:15:37.319: INFO: Pod "pod-configmaps-16681a42-bc0a-4187-bb84-ae00d7d26daf": Phase="Pending", Reason="", readiness=false. Elapsed: 30.327173ms Jul 15 13:15:39.324: INFO: Pod "pod-configmaps-16681a42-bc0a-4187-bb84-ae00d7d26daf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034628682s Jul 15 13:15:41.328: INFO: Pod "pod-configmaps-16681a42-bc0a-4187-bb84-ae00d7d26daf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039190859s STEP: Saw pod success Jul 15 13:15:41.328: INFO: Pod "pod-configmaps-16681a42-bc0a-4187-bb84-ae00d7d26daf" satisfied condition "success or failure" Jul 15 13:15:41.332: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-16681a42-bc0a-4187-bb84-ae00d7d26daf container configmap-volume-test: STEP: delete the pod Jul 15 13:15:41.352: INFO: Waiting for pod pod-configmaps-16681a42-bc0a-4187-bb84-ae00d7d26daf to disappear Jul 15 13:15:41.363: INFO: Pod pod-configmaps-16681a42-bc0a-4187-bb84-ae00d7d26daf no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:15:41.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1986" for this suite. 
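Note: "non-root" in the test above means the pod-level security context forces an unprivileged UID while the configMap volume is read. A minimal sketch (names and UID are illustrative assumptions):

  kubectl create configmap cm-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-nonroot-demo          # hypothetical name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                     # run the container as a non-root UID
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["/bin/sh", "-c", "cat /etc/cm/data-1"]
      volumeMounts:
      - name: cm
        mountPath: /etc/cm
    volumes:
    - name: cm
      configMap:
        name: cm-demo
  EOF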
Jul 15 13:15:47.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:15:47.464: INFO: namespace configmap-1986 deletion completed in 6.097135675s • [SLOW TEST:10.248 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:15:47.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 15 13:15:47.552: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6acd9ac1-5198-48a3-ad85-bf2c49ed0152" in namespace "projected-5311" to be "success or failure" Jul 15 13:15:47.556: INFO: Pod "downwardapi-volume-6acd9ac1-5198-48a3-ad85-bf2c49ed0152": Phase="Pending", Reason="", readiness=false. Elapsed: 3.886399ms Jul 15 13:15:49.560: INFO: Pod "downwardapi-volume-6acd9ac1-5198-48a3-ad85-bf2c49ed0152": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007685791s Jul 15 13:15:51.685: INFO: Pod "downwardapi-volume-6acd9ac1-5198-48a3-ad85-bf2c49ed0152": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.133273842s STEP: Saw pod success Jul 15 13:15:51.685: INFO: Pod "downwardapi-volume-6acd9ac1-5198-48a3-ad85-bf2c49ed0152" satisfied condition "success or failure" Jul 15 13:15:51.687: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6acd9ac1-5198-48a3-ad85-bf2c49ed0152 container client-container: STEP: delete the pod Jul 15 13:15:51.737: INFO: Waiting for pod downwardapi-volume-6acd9ac1-5198-48a3-ad85-bf2c49ed0152 to disappear Jul 15 13:15:51.747: INFO: Pod downwardapi-volume-6acd9ac1-5198-48a3-ad85-bf2c49ed0152 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:15:51.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5311" for this suite. 
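[Note] The projected downward API test above exposes the container's own CPU request as a file in a projected volume. A sketch of the volume definition under those assumptions (names and divisor are illustrative; the container referenced must declare a CPU request for the value to come from it):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
								// Divisor scales the reported value; "1m" yields millicores.
								Divisor: resource.MustParse("1m"),
							},
						}},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}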
Jul 15 13:15:57.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:15:57.844: INFO: namespace projected-5311 deletion completed in 6.09437228s • [SLOW TEST:10.380 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:15:57.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-3843/configmap-test-58464524-6056-4800-9a17-37fb47186fbe STEP: Creating a pod to test consume configMaps Jul 15 13:15:57.920: INFO: Waiting up to 5m0s for pod "pod-configmaps-a4eaafc5-e0a7-4463-ba67-0391b13c0c1f" in namespace "configmap-3843" to be "success or failure" Jul 15 13:15:57.927: INFO: Pod "pod-configmaps-a4eaafc5-e0a7-4463-ba67-0391b13c0c1f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.112302ms Jul 15 13:15:59.932: INFO: Pod "pod-configmaps-a4eaafc5-e0a7-4463-ba67-0391b13c0c1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011514377s Jul 15 13:16:01.936: INFO: Pod "pod-configmaps-a4eaafc5-e0a7-4463-ba67-0391b13c0c1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015744318s STEP: Saw pod success Jul 15 13:16:01.936: INFO: Pod "pod-configmaps-a4eaafc5-e0a7-4463-ba67-0391b13c0c1f" satisfied condition "success or failure" Jul 15 13:16:01.939: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a4eaafc5-e0a7-4463-ba67-0391b13c0c1f container env-test: STEP: delete the pod Jul 15 13:16:02.127: INFO: Waiting for pod pod-configmaps-a4eaafc5-e0a7-4463-ba67-0391b13c0c1f to disappear Jul 15 13:16:02.135: INFO: Pod pod-configmaps-a4eaafc5-e0a7-4463-ba67-0391b13c0c1f no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:16:02.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3843" for this suite. 
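[Note] The test above consumes a ConfigMap through the container environment rather than a volume. The wiring is a single EnvVar with a ConfigMapKeyRef; ConfigMap and key names below are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The kubelet resolves the key at container start; if the ConfigMap or
	// key is missing (and not marked Optional) the container fails to start.
	env := corev1.EnvVar{
		Name: "CONFIG_DATA_1",
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
				Key:                  "data-1",
			},
		},
	}
	b, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(b))
}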
Jul 15 13:16:08.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:16:08.387: INFO: namespace configmap-3843 deletion completed in 6.248370904s • [SLOW TEST:10.542 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:16:08.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:17:08.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7879" for this suite. 
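[Note] The probe test above verifies that a container whose readiness probe always fails is never marked Ready and, unlike a failing liveness probe, is never restarted. A sketch of such a probe; this assumes a current k8s.io/api module, where the embedded field is named ProbeHandler (releases contemporary with this suite, v1.15, call it Handler):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	probe := corev1.Probe{
		// /bin/false always exits non-zero, so every probe attempt fails.
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
		},
		InitialDelaySeconds: 5,
		PeriodSeconds:       5,
	}
	b, _ := json.MarshalIndent(probe, "", "  ")
	fmt.Println(string(b))
}

Readiness only gates Service endpoints and Ready status; restarts are the liveness probe's job, which is exactly the distinction this test pins down.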
Jul 15 13:17:30.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:17:30.621: INFO: namespace container-probe-7879 deletion completed in 22.110877483s • [SLOW TEST:82.233 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:17:30.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-348aa3cc-4217-4645-926c-e419a80aecc4 STEP: Creating a pod to test consume secrets Jul 15 13:17:30.704: INFO: Waiting up to 5m0s for pod "pod-secrets-40257e0d-8d28-4852-aefa-55553fd9308a" in namespace "secrets-6199" to be "success or failure" Jul 15 13:17:30.713: INFO: Pod "pod-secrets-40257e0d-8d28-4852-aefa-55553fd9308a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.908353ms Jul 15 13:17:32.717: INFO: Pod "pod-secrets-40257e0d-8d28-4852-aefa-55553fd9308a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013049041s Jul 15 13:17:34.722: INFO: Pod "pod-secrets-40257e0d-8d28-4852-aefa-55553fd9308a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017641361s STEP: Saw pod success Jul 15 13:17:34.722: INFO: Pod "pod-secrets-40257e0d-8d28-4852-aefa-55553fd9308a" satisfied condition "success or failure" Jul 15 13:17:34.725: INFO: Trying to get logs from node iruya-worker pod pod-secrets-40257e0d-8d28-4852-aefa-55553fd9308a container secret-volume-test: STEP: delete the pod Jul 15 13:17:34.754: INFO: Waiting for pod pod-secrets-40257e0d-8d28-4852-aefa-55553fd9308a to disappear Jul 15 13:17:34.781: INFO: Pod pod-secrets-40257e0d-8d28-4852-aefa-55553fd9308a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:17:34.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6199" for this suite. 
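[Note] The Secrets test above mounts a Secret as a volume and reads it back. The volume half of such a pod looks like the sketch below (secret and volume names are illustrative; each key in the Secret becomes a file under the mount path):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			// Mounted (e.g. at /etc/secret-volume) the secret's keys appear
			// as files whose contents are the decoded secret data.
			Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}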
Jul 15 13:17:40.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:17:40.870: INFO: namespace secrets-6199 deletion completed in 6.085497258s • [SLOW TEST:10.249 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:17:40.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jul 15 13:17:40.999: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 15 13:17:41.018: INFO: Waiting for terminating namespaces to be deleted... Jul 15 13:17:41.020: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Jul 15 13:17:41.027: INFO: kindnet-452tn from kube-system started at 2020-07-10 10:24:50 +0000 UTC (1 container statuses recorded) Jul 15 13:17:41.027: INFO: Container kindnet-cni ready: true, restart count 0 Jul 15 13:17:41.027: INFO: live-test4-74f5c7c95f-l2676 from default started at 2020-07-10 11:02:03 +0000 UTC (1 container statuses recorded) Jul 15 13:17:41.027: INFO: Container live-test4 ready: false, restart count 1415 Jul 15 13:17:41.027: INFO: kube-proxy-2pg5m from kube-system started at 2020-07-10 10:24:49 +0000 UTC (1 container statuses recorded) Jul 15 13:17:41.027: INFO: Container kube-proxy ready: true, restart count 0 Jul 15 13:17:41.027: INFO: dnsutils from default started at 2020-07-10 11:15:11 +0000 UTC (1 container statuses recorded) Jul 15 13:17:41.027: INFO: Container dnsutils ready: true, restart count 121 Jul 15 13:17:41.027: INFO: live-test7-5dd99f9b45-jtpmp from default started at 2020-07-10 11:54:47 +0000 UTC (1 container statuses recorded) Jul 15 13:17:41.027: INFO: Container live-test7 ready: false, restart count 1399 Jul 15 13:17:41.027: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Jul 15 13:17:41.035: INFO: kube-proxy-bf52l from kube-system started at 2020-07-10 10:24:49 +0000 UTC (1 container statuses recorded) Jul 15 13:17:41.035: INFO: Container kube-proxy ready: true, restart count 0 Jul 15 13:17:41.035: INFO: live-test2-54d9dcd87-bsdvc from default started at 2020-07-10 10:58:02 +0000 UTC (1 container statuses recorded) Jul 15 13:17:41.035: INFO: Container live-test2 ready: false, restart count 1417 Jul 15 13:17:41.035: INFO: live-test8-55669b464c-bfdv5 from default started at 2020-07-10 11:56:07 +0000 UTC (1 container statuses recorded) Jul 15 13:17:41.035: INFO: Container live-test8 ready: false, restart count 1402 Jul 15 13:17:41.035: INFO: kindnet-qpkmc from kube-system started at 2020-07-10 10:24:50 +0000 UTC (1 container statuses recorded) 
Jul 15 13:17:41.035: INFO: Container kindnet-cni ready: true, restart count 0 Jul 15 13:17:41.035: INFO: live-test3-6556bf7d77-2k9dg from default started at 2020-07-10 11:00:05 +0000 UTC (1 container statuses recorded) Jul 15 13:17:41.035: INFO: Container live-test3 ready: false, restart count 1413 Jul 15 13:17:41.035: INFO: live-test6-988dbb567-rqc7x from default started at 2020-07-10 11:22:41 +0000 UTC (1 container statuses recorded) Jul 15 13:17:41.035: INFO: Container live-test6 ready: false, restart count 1413 Jul 15 13:17:41.035: INFO: live-test1-677ffc8869-nvdk5 from default started at 2020-07-10 10:49:37 +0000 UTC (1 container statuses recorded) Jul 15 13:17:41.035: INFO: Container live-test1 ready: false, restart count 1416 Jul 15 13:17:41.035: INFO: live-test5-b6fcb7757-w869x from default started at 2020-07-10 11:06:28 +0000 UTC (1 container statuses recorded) Jul 15 13:17:41.035: INFO: Container live-test5 ready: false, restart count 1412 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Jul 15 13:17:41.133: INFO: Pod dnsutils requesting resource cpu=0m on Node iruya-worker Jul 15 13:17:41.133: INFO: Pod live-test1-677ffc8869-nvdk5 requesting resource cpu=0m on Node iruya-worker2 Jul 15 13:17:41.133: INFO: Pod live-test2-54d9dcd87-bsdvc requesting resource cpu=0m on Node iruya-worker2 Jul 15 13:17:41.133: INFO: Pod live-test3-6556bf7d77-2k9dg requesting resource cpu=0m on Node iruya-worker2 Jul 15 13:17:41.133: INFO: Pod live-test4-74f5c7c95f-l2676 requesting resource cpu=0m on Node iruya-worker Jul 15 13:17:41.133: INFO: Pod live-test5-b6fcb7757-w869x requesting resource cpu=0m on Node iruya-worker2 Jul 15 13:17:41.133: INFO: Pod live-test6-988dbb567-rqc7x requesting resource cpu=0m on Node iruya-worker2 Jul 15 13:17:41.133: INFO: Pod live-test7-5dd99f9b45-jtpmp requesting resource cpu=0m on Node iruya-worker Jul 15 13:17:41.133: INFO: Pod live-test8-55669b464c-bfdv5 requesting resource cpu=0m on Node iruya-worker2 Jul 15 13:17:41.133: INFO: Pod kindnet-452tn requesting resource cpu=100m on Node iruya-worker Jul 15 13:17:41.133: INFO: Pod kindnet-qpkmc requesting resource cpu=100m on Node iruya-worker2 Jul 15 13:17:41.133: INFO: Pod kube-proxy-2pg5m requesting resource cpu=0m on Node iruya-worker Jul 15 13:17:41.133: INFO: Pod kube-proxy-bf52l requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-2b886686-5c6e-40ac-96e0-b9ced27351b3.1621ef7c86833430], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7795/filler-pod-2b886686-5c6e-40ac-96e0-b9ced27351b3 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-2b886686-5c6e-40ac-96e0-b9ced27351b3.1621ef7d183a7b6f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-2b886686-5c6e-40ac-96e0-b9ced27351b3.1621ef7d4e59d779], Reason = [Created], Message = [Created container filler-pod-2b886686-5c6e-40ac-96e0-b9ced27351b3] STEP: Considering event: Type = [Normal], Name = [filler-pod-2b886686-5c6e-40ac-96e0-b9ced27351b3.1621ef7d5c3db8d8], Reason = [Started], Message = [Started container filler-pod-2b886686-5c6e-40ac-96e0-b9ced27351b3] STEP: Considering event: Type = [Normal], Name = [filler-pod-84d6e5de-c7de-4aa9-a0dc-f4499c80e251.1621ef7c8542403e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7795/filler-pod-84d6e5de-c7de-4aa9-a0dc-f4499c80e251 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-84d6e5de-c7de-4aa9-a0dc-f4499c80e251.1621ef7cd01c515a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-84d6e5de-c7de-4aa9-a0dc-f4499c80e251.1621ef7d297eb2ca], Reason = [Created], Message = [Created container filler-pod-84d6e5de-c7de-4aa9-a0dc-f4499c80e251] STEP: Considering event: Type = [Normal], Name = [filler-pod-84d6e5de-c7de-4aa9-a0dc-f4499c80e251.1621ef7d3cc1a21a], Reason = [Started], Message = [Started container filler-pod-84d6e5de-c7de-4aa9-a0dc-f4499c80e251] STEP: Considering event: Type = [Warning], Name = [additional-pod.1621ef7ded4f8ae9], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:17:48.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7795" for this suite. 
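[Note] The scheduling test above fills each node's allocatable CPU with "filler" pods and then shows that one more pod requesting CPU cannot be placed, producing the "0/3 nodes are available ... Insufficient cpu" event. The mechanism is that the scheduler sums declared *requests* (not live usage) against node allocatable. A sketch of a container whose request triggers that check; the 1000m figure is an illustrative value, not taken from this run:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	c := corev1.Container{
		Name:  "filler",
		Image: "k8s.gcr.io/pause:3.1",
		Resources: corev1.ResourceRequirements{
			// If no node has 1000m of unreserved CPU left, the pod stays
			// Pending with a FailedScheduling event like the one logged above.
			Requests: corev1.ResourceList{
				corev1.ResourceCPU: resource.MustParse("1000m"),
			},
		},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}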
Jul 15 13:17:56.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:17:56.365: INFO: namespace sched-pred-7795 deletion completed in 8.082365606s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:15.494 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:17:56.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jul 15 13:17:56.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2944' Jul 15 13:17:56.639: INFO: stderr: "" Jul 15 13:17:56.639: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jul 15 13:17:57.644: INFO: Selector matched 1 pods for map[app:redis] Jul 15 13:17:57.644: INFO: Found 0 / 1 Jul 15 13:17:58.644: INFO: Selector matched 1 pods for map[app:redis] Jul 15 13:17:58.644: INFO: Found 0 / 1 Jul 15 13:17:59.643: INFO: Selector matched 1 pods for map[app:redis] Jul 15 13:17:59.643: INFO: Found 0 / 1 Jul 15 13:18:00.669: INFO: Selector matched 1 pods for map[app:redis] Jul 15 13:18:00.669: INFO: Found 1 / 1 Jul 15 13:18:00.669: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jul 15 13:18:00.672: INFO: Selector matched 1 pods for map[app:redis] Jul 15 13:18:00.672: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 15 13:18:00.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-ndz9s --namespace=kubectl-2944 -p {"metadata":{"annotations":{"x":"y"}}}' Jul 15 13:18:00.807: INFO: stderr: "" Jul 15 13:18:00.807: INFO: stdout: "pod/redis-master-ndz9s patched\n" STEP: checking annotations Jul 15 13:18:00.810: INFO: Selector matched 1 pods for map[app:redis] Jul 15 13:18:00.811: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:18:00.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2944" for this suite. 
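[Note] The kubectl patch test above adds an annotation with a strategic-merge patch. The same operation through client-go, using the identical payload kubectl sends; pod name, namespace, and kubeconfig path are placeholders:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent to: kubectl patch pod example-pod -p '{"metadata":{"annotations":{"x":"y"}}}'
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	pod, err := cs.CoreV1().Pods("default").Patch(context.TODO(), "example-pod",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("annotations now:", pod.Annotations)
}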
Jul 15 13:18:22.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:18:22.913: INFO: namespace kubectl-2944 deletion completed in 22.098985547s • [SLOW TEST:26.548 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:18:22.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-034a4fc0-e7af-4686-ac9a-42d195b19b6e STEP: Creating a pod to test consume configMaps Jul 15 13:18:22.970: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cb41a9b7-bd5a-4eef-9a16-ee134240e82a" in namespace "projected-5247" to be "success or failure" Jul 15 13:18:22.984: INFO: Pod "pod-projected-configmaps-cb41a9b7-bd5a-4eef-9a16-ee134240e82a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.520245ms Jul 15 13:18:24.988: INFO: Pod "pod-projected-configmaps-cb41a9b7-bd5a-4eef-9a16-ee134240e82a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017240972s Jul 15 13:18:26.992: INFO: Pod "pod-projected-configmaps-cb41a9b7-bd5a-4eef-9a16-ee134240e82a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021370893s STEP: Saw pod success Jul 15 13:18:26.992: INFO: Pod "pod-projected-configmaps-cb41a9b7-bd5a-4eef-9a16-ee134240e82a" satisfied condition "success or failure" Jul 15 13:18:26.995: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-cb41a9b7-bd5a-4eef-9a16-ee134240e82a container projected-configmap-volume-test: STEP: delete the pod Jul 15 13:18:27.113: INFO: Waiting for pod pod-projected-configmaps-cb41a9b7-bd5a-4eef-9a16-ee134240e82a to disappear Jul 15 13:18:27.133: INFO: Pod pod-projected-configmaps-cb41a9b7-bd5a-4eef-9a16-ee134240e82a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:18:27.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5247" for this suite. 
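[Note] The projected configMap test above remaps a key to a different path and sets a per-item file mode. A sketch of that volume shape, with illustrative names and mode:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
						// Remap key "data-1" to a nested path and give that one
						// file mode 0400, overriding the volume-wide default.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: ptr(int32(0400))}},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}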
Jul 15 13:18:33.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:18:33.339: INFO: namespace projected-5247 deletion completed in 6.203378152s • [SLOW TEST:10.427 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:18:33.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-ee2cf283-7be3-43aa-b078-57a2bd55903a STEP: Creating a pod to test consume configMaps Jul 15 13:18:33.421: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bc8abe87-6e58-4625-b18c-dbcc4017a1fd" in namespace "projected-1771" to be "success or failure" Jul 15 13:18:33.427: INFO: Pod "pod-projected-configmaps-bc8abe87-6e58-4625-b18c-dbcc4017a1fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06324ms Jul 15 13:18:35.430: INFO: Pod "pod-projected-configmaps-bc8abe87-6e58-4625-b18c-dbcc4017a1fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00898861s Jul 15 13:18:37.471: INFO: Pod "pod-projected-configmaps-bc8abe87-6e58-4625-b18c-dbcc4017a1fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050275183s STEP: Saw pod success Jul 15 13:18:37.471: INFO: Pod "pod-projected-configmaps-bc8abe87-6e58-4625-b18c-dbcc4017a1fd" satisfied condition "success or failure" Jul 15 13:18:37.474: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-bc8abe87-6e58-4625-b18c-dbcc4017a1fd container projected-configmap-volume-test: STEP: delete the pod Jul 15 13:18:37.494: INFO: Waiting for pod pod-projected-configmaps-bc8abe87-6e58-4625-b18c-dbcc4017a1fd to disappear Jul 15 13:18:37.498: INFO: Pod pod-projected-configmaps-bc8abe87-6e58-4625-b18c-dbcc4017a1fd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:18:37.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1771" for this suite. 
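[Note] The defaultMode variant above differs from the previous test only in where the mode is set: DefaultMode on the projected volume applies to every file that lacks a per-item Mode. A compact sketch (names and 0440 mode are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	src := corev1.ProjectedVolumeSource{
		// Volume-wide default; per-item Mode entries would override it.
		DefaultMode: ptr(int32(0440)),
		Sources: []corev1.VolumeProjection{{
			ConfigMap: &corev1.ConfigMapProjection{
				LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
			},
		}},
	}
	b, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(b))
}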
Jul 15 13:18:43.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:18:43.600: INFO: namespace projected-1771 deletion completed in 6.096254475s • [SLOW TEST:10.260 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:18:43.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 15 13:18:43.684: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jul 15 13:18:43.696: INFO: Number of nodes with available pods: 0 Jul 15 13:18:43.696: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jul 15 13:18:43.787: INFO: Number of nodes with available pods: 0 Jul 15 13:18:43.787: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:18:44.791: INFO: Number of nodes with available pods: 0 Jul 15 13:18:44.791: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:18:45.792: INFO: Number of nodes with available pods: 0 Jul 15 13:18:45.792: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:18:46.792: INFO: Number of nodes with available pods: 0 Jul 15 13:18:46.792: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:18:47.791: INFO: Number of nodes with available pods: 1 Jul 15 13:18:47.791: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jul 15 13:18:47.824: INFO: Number of nodes with available pods: 1 Jul 15 13:18:47.824: INFO: Number of running nodes: 0, number of available pods: 1 Jul 15 13:18:48.828: INFO: Number of nodes with available pods: 0 Jul 15 13:18:48.828: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jul 15 13:18:48.841: INFO: Number of nodes with available pods: 0 Jul 15 13:18:48.841: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:18:49.845: INFO: Number of nodes with available pods: 0 Jul 15 13:18:49.845: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:18:50.845: INFO: Number of nodes with available pods: 0 Jul 15 13:18:50.845: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:18:51.845: INFO: Number of nodes with available pods: 0 Jul 15 13:18:51.845: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:18:52.845: INFO: Number of nodes with available pods: 0 Jul 15 13:18:52.845: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:18:53.845: INFO: Number of nodes with available pods: 0 Jul 15 13:18:53.845: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:18:54.844: INFO: Number of nodes with available pods: 0 Jul 15 13:18:54.844: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:18:55.846: INFO: Number of nodes with available pods: 0 Jul 15 13:18:55.846: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:18:56.859: INFO: Number of nodes with available pods: 0 Jul 15 13:18:56.859: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:18:57.861: INFO: Number of nodes with available pods: 0 Jul 15 13:18:57.861: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:18:58.844: INFO: Number of nodes with available pods: 0 Jul 15 13:18:58.844: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:18:59.844: INFO: Number of nodes with available pods: 0 Jul 15 13:18:59.844: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:19:00.845: INFO: Number of nodes with available pods: 1 Jul 15 13:19:00.845: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5012, will wait for the garbage collector to delete the pods Jul 15 13:19:00.909: INFO: Deleting DaemonSet.extensions daemon-set took: 5.075734ms Jul 15 13:19:01.209: INFO: Terminating 
DaemonSet.extensions daemon-set pods took: 300.221266ms Jul 15 13:19:06.912: INFO: Number of nodes with available pods: 0 Jul 15 13:19:06.912: INFO: Number of running nodes: 0, number of available pods: 0 Jul 15 13:19:06.914: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5012/daemonsets","resourceVersion":"1023584"},"items":null} Jul 15 13:19:06.915: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5012/pods","resourceVersion":"1023584"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:19:06.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5012" for this suite. Jul 15 13:19:12.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:19:13.049: INFO: namespace daemonsets-5012 deletion completed in 6.09301319s • [SLOW TEST:29.449 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:19:13.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 15 13:19:13.151: INFO: Waiting up to 5m0s for pod "pod-dcee4cea-ecbb-4295-a557-8170f5cbf816" in namespace "emptydir-2617" to be "success or failure" Jul 15 13:19:13.158: INFO: Pod "pod-dcee4cea-ecbb-4295-a557-8170f5cbf816": Phase="Pending", Reason="", readiness=false. Elapsed: 7.5374ms Jul 15 13:19:15.162: INFO: Pod "pod-dcee4cea-ecbb-4295-a557-8170f5cbf816": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011390577s Jul 15 13:19:17.166: INFO: Pod "pod-dcee4cea-ecbb-4295-a557-8170f5cbf816": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015604574s STEP: Saw pod success Jul 15 13:19:17.166: INFO: Pod "pod-dcee4cea-ecbb-4295-a557-8170f5cbf816" satisfied condition "success or failure" Jul 15 13:19:17.169: INFO: Trying to get logs from node iruya-worker pod pod-dcee4cea-ecbb-4295-a557-8170f5cbf816 container test-container: STEP: delete the pod Jul 15 13:19:17.302: INFO: Waiting for pod pod-dcee4cea-ecbb-4295-a557-8170f5cbf816 to disappear Jul 15 13:19:17.350: INFO: Pod pod-dcee4cea-ecbb-4295-a557-8170f5cbf816 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:19:17.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2617" for this suite. Jul 15 13:19:23.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:19:23.545: INFO: namespace emptydir-2617 deletion completed in 6.190384448s • [SLOW TEST:10.495 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:19:23.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 15 13:19:23.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jul 15 13:19:23.818: INFO: stderr: "" Jul 15 13:19:23.818: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-07-09T18:54:28Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:31:02Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:19:23.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2391" for this suite. 
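[Note] The kubectl version test above just asserts that both client and server version structs are printed. The server half of that data comes from the discovery API, which client-go exposes directly; kubeconfig path below is a placeholder:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same version.Info the log shows for the server side of `kubectl version`.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("server %s (major %s, minor %s, built %s)\n", v.GitVersion, v.Major, v.Minor, v.BuildDate)
}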
Jul 15 13:19:29.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:19:29.914: INFO: namespace kubectl-2391 deletion completed in 6.091154842s • [SLOW TEST:6.369 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:19:29.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 15 13:19:29.964: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jul 15 13:19:32.033: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:19:33.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3641" for this suite. 
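[Note] The ReplicationController test above creates a two-pod quota, asks an RC for more replicas than that, and checks a failure condition is surfaced (then cleared once scaled down). A sketch of the quota creation and the condition read, assuming current client-go signatures; namespace and object names are illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A quota capping the namespace at two pods, like the test's "condition-test".
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas("default").Create(context.TODO(), quota, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// An RC asking for more replicas than the quota allows keeps running,
	// but the controller records a ReplicaFailure condition on its status.
	rc, err := cs.CoreV1().ReplicationControllers("default").Get(context.TODO(), "condition-test", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range rc.Status.Conditions {
		if c.Type == corev1.ReplicationControllerReplicaFailure {
			fmt.Println(c.Reason, c.Message)
		}
	}
}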
Jul 15 13:19:39.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:19:39.230: INFO: namespace replication-controller-3641 deletion completed in 6.166998811s • [SLOW TEST:9.315 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:19:39.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Jul 15 13:19:39.340: INFO: Waiting up to 5m0s for pod "var-expansion-02e86b9c-c4a3-4e53-974c-0505f5f02ac6" in namespace "var-expansion-2707" to be "success or failure" Jul 15 13:19:39.344: INFO: Pod "var-expansion-02e86b9c-c4a3-4e53-974c-0505f5f02ac6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.707788ms Jul 15 13:19:41.348: INFO: Pod "var-expansion-02e86b9c-c4a3-4e53-974c-0505f5f02ac6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007280703s Jul 15 13:19:43.352: INFO: Pod "var-expansion-02e86b9c-c4a3-4e53-974c-0505f5f02ac6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011462147s STEP: Saw pod success Jul 15 13:19:43.352: INFO: Pod "var-expansion-02e86b9c-c4a3-4e53-974c-0505f5f02ac6" satisfied condition "success or failure" Jul 15 13:19:43.356: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-02e86b9c-c4a3-4e53-974c-0505f5f02ac6 container dapi-container: STEP: delete the pod Jul 15 13:19:43.399: INFO: Waiting for pod var-expansion-02e86b9c-c4a3-4e53-974c-0505f5f02ac6 to disappear Jul 15 13:19:43.410: INFO: Pod var-expansion-02e86b9c-c4a3-4e53-974c-0505f5f02ac6 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:19:43.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2707" for this suite. 
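[Note] The variable expansion test above composes one env var from another using the $(VAR) syntax, which the kubelet expands at container start. Variable names and values below are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := []corev1.EnvVar{
		{Name: "FOO", Value: "foo-value"},
		// $(FOO) is substituted because FOO is declared earlier in the same
		// env list; a reference that cannot be resolved is left verbatim.
		{Name: "COMPOSED", Value: "prefix-$(FOO)-suffix"},
	}
	b, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(b))
}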
Jul 15 13:19:49.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:19:49.507: INFO: namespace var-expansion-2707 deletion completed in 6.090433509s • [SLOW TEST:10.276 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:19:49.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-9b73fde4-3c15-4671-876e-c2c436d29adc STEP: Creating configMap with name cm-test-opt-upd-6d84ab7c-6d94-4462-8a7e-65240a984137 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9b73fde4-3c15-4671-876e-c2c436d29adc STEP: Updating configmap cm-test-opt-upd-6d84ab7c-6d94-4462-8a7e-65240a984137 STEP: Creating configMap with name cm-test-opt-create-e7524554-7aff-4799-996a-2225d188e6ce STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:19:59.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5333" for this suite. 
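[Note] The "optional updates" test above deletes, updates, and creates ConfigMaps behind a running pod and waits for the mounted files to follow. The key ingredient is the Optional flag on the projection: the pod starts and keeps running even while a referenced ConfigMap is absent, and the kubelet re-syncs the volume as the object changes. A sketch, with illustrative names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						// Optional: the mount tolerates the ConfigMap being
						// deleted and repopulates when it reappears or changes.
						Optional: ptr(true),
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}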
Jul 15 13:20:21.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:20:21.796: INFO: namespace projected-5333 deletion completed in 22.099514147s • [SLOW TEST:32.290 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:20:21.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jul 15 13:20:28.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-de6719b5-b629-4457-b2bd-b1908b0bcacd -c busybox-main-container --namespace=emptydir-9372 -- cat /usr/share/volumeshare/shareddata.txt' Jul 15 13:20:28.212: INFO: stderr: "I0715 13:20:28.135913 1199 log.go:172] (0xc0001166e0) (0xc0002d6a00) Create stream\nI0715 13:20:28.135970 1199 log.go:172] (0xc0001166e0) (0xc0002d6a00) Stream added, broadcasting: 1\nI0715 13:20:28.139981 1199 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0715 13:20:28.140044 1199 log.go:172] (0xc0001166e0) (0xc0002d6000) Create stream\nI0715 13:20:28.140071 1199 log.go:172] (0xc0001166e0) (0xc0002d6000) Stream added, broadcasting: 3\nI0715 13:20:28.141289 1199 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0715 13:20:28.141334 1199 log.go:172] (0xc0001166e0) (0xc0005683c0) Create stream\nI0715 13:20:28.141346 1199 log.go:172] (0xc0001166e0) (0xc0005683c0) Stream added, broadcasting: 5\nI0715 13:20:28.142390 1199 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0715 13:20:28.205230 1199 log.go:172] (0xc0001166e0) Data frame received for 3\nI0715 13:20:28.205255 1199 log.go:172] (0xc0002d6000) (3) Data frame handling\nI0715 13:20:28.205279 1199 log.go:172] (0xc0001166e0) Data frame received for 5\nI0715 13:20:28.205320 1199 log.go:172] (0xc0005683c0) (5) Data frame handling\nI0715 13:20:28.205345 1199 log.go:172] (0xc0002d6000) (3) Data frame sent\nI0715 13:20:28.205358 1199 log.go:172] (0xc0001166e0) Data frame received for 3\nI0715 13:20:28.205374 1199 log.go:172] (0xc0002d6000) (3) Data frame handling\nI0715 13:20:28.207413 1199 log.go:172] (0xc0001166e0) Data frame received for 1\nI0715 13:20:28.207449 1199 log.go:172] (0xc0002d6a00) (1) Data frame handling\nI0715 13:20:28.207469 1199 log.go:172] (0xc0002d6a00) (1) Data frame sent\nI0715 13:20:28.207487 1199 log.go:172] (0xc0001166e0) (0xc0002d6a00) Stream removed, broadcasting: 1\nI0715 13:20:28.207508 1199 log.go:172] (0xc0001166e0) Go away 
received\nI0715 13:20:28.208006 1199 log.go:172] (0xc0001166e0) (0xc0002d6a00) Stream removed, broadcasting: 1\nI0715 13:20:28.208049 1199 log.go:172] (0xc0001166e0) (0xc0002d6000) Stream removed, broadcasting: 3\nI0715 13:20:28.208072 1199 log.go:172] (0xc0001166e0) (0xc0005683c0) Stream removed, broadcasting: 5\n" Jul 15 13:20:28.212: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:20:28.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9372" for this suite. Jul 15 13:20:34.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:20:34.312: INFO: namespace emptydir-9372 deletion completed in 6.09457131s • [SLOW TEST:12.515 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:20:34.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:20:38.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6239" for this suite. 
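[Note] The Kubelet test above runs a busybox container with a read-only root filesystem and verifies writes to it fail. The behavior is driven by one container-level SecurityContext field; name, image, and command below are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	c := corev1.Container{
		Name:    "busybox-readonly",
		Image:   "busybox",
		Command: []string{"sh", "-c", "echo test > /file && sleep 60"},
		SecurityContext: &corev1.SecurityContext{
			// With a read-only root filesystem the write above fails;
			// only mounted volumes remain writable.
			ReadOnlyRootFilesystem: ptr(true),
		},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}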
Jul 15 13:21:28.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:21:28.557: INFO: namespace kubelet-test-6239 deletion completed in 50.091981588s • [SLOW TEST:54.244 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:21:28.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 15 13:21:28.667: INFO: Waiting up to 5m0s for pod "pod-22a3eaeb-8dc3-465c-b223-22d97e7f1d45" in namespace "emptydir-4955" to be "success or failure" Jul 15 13:21:28.689: INFO: Pod "pod-22a3eaeb-8dc3-465c-b223-22d97e7f1d45": Phase="Pending", Reason="", readiness=false. Elapsed: 21.720479ms Jul 15 13:21:30.694: INFO: Pod "pod-22a3eaeb-8dc3-465c-b223-22d97e7f1d45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027565366s Jul 15 13:21:32.698: INFO: Pod "pod-22a3eaeb-8dc3-465c-b223-22d97e7f1d45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031447704s STEP: Saw pod success Jul 15 13:21:32.698: INFO: Pod "pod-22a3eaeb-8dc3-465c-b223-22d97e7f1d45" satisfied condition "success or failure" Jul 15 13:21:32.701: INFO: Trying to get logs from node iruya-worker2 pod pod-22a3eaeb-8dc3-465c-b223-22d97e7f1d45 container test-container: STEP: delete the pod Jul 15 13:21:32.773: INFO: Waiting for pod pod-22a3eaeb-8dc3-465c-b223-22d97e7f1d45 to disappear Jul 15 13:21:32.778: INFO: Pod pod-22a3eaeb-8dc3-465c-b223-22d97e7f1d45 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:21:32.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4955" for this suite. 
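[Note] The emptyDir test above exercises a tmpfs-backed volume: medium "Memory" puts the emptyDir on RAM, and the test then checks that a file created with mode 0777 reports those permissions. The volume definition is a one-liner; the volume name is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			// StorageMediumMemory backs the emptyDir with tmpfs; contents
			// count against container memory and vanish when the pod leaves the node.
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}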
Jul 15 13:21:38.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:21:38.871: INFO: namespace emptydir-4955 deletion completed in 6.090508272s • [SLOW TEST:10.314 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:21:38.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-z96j STEP: Creating a pod to test atomic-volume-subpath Jul 15 13:21:38.966: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-z96j" in namespace "subpath-5744" to be "success or failure" Jul 15 13:21:38.982: INFO: Pod "pod-subpath-test-downwardapi-z96j": Phase="Pending", Reason="", readiness=false. Elapsed: 16.399815ms Jul 15 13:21:40.986: INFO: Pod "pod-subpath-test-downwardapi-z96j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020052043s Jul 15 13:21:42.990: INFO: Pod "pod-subpath-test-downwardapi-z96j": Phase="Running", Reason="", readiness=true. Elapsed: 4.024015503s Jul 15 13:21:44.995: INFO: Pod "pod-subpath-test-downwardapi-z96j": Phase="Running", Reason="", readiness=true. Elapsed: 6.028644129s Jul 15 13:21:46.999: INFO: Pod "pod-subpath-test-downwardapi-z96j": Phase="Running", Reason="", readiness=true. Elapsed: 8.032951665s Jul 15 13:21:49.003: INFO: Pod "pod-subpath-test-downwardapi-z96j": Phase="Running", Reason="", readiness=true. Elapsed: 10.03731405s Jul 15 13:21:51.007: INFO: Pod "pod-subpath-test-downwardapi-z96j": Phase="Running", Reason="", readiness=true. Elapsed: 12.041084012s Jul 15 13:21:53.011: INFO: Pod "pod-subpath-test-downwardapi-z96j": Phase="Running", Reason="", readiness=true. Elapsed: 14.044967248s Jul 15 13:21:55.016: INFO: Pod "pod-subpath-test-downwardapi-z96j": Phase="Running", Reason="", readiness=true. Elapsed: 16.050369768s Jul 15 13:21:57.021: INFO: Pod "pod-subpath-test-downwardapi-z96j": Phase="Running", Reason="", readiness=true. Elapsed: 18.054796768s Jul 15 13:21:59.025: INFO: Pod "pod-subpath-test-downwardapi-z96j": Phase="Running", Reason="", readiness=true. Elapsed: 20.059350831s Jul 15 13:22:01.030: INFO: Pod "pod-subpath-test-downwardapi-z96j": Phase="Running", Reason="", readiness=true. Elapsed: 22.063713199s Jul 15 13:22:03.034: INFO: Pod "pod-subpath-test-downwardapi-z96j": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.068319078s STEP: Saw pod success Jul 15 13:22:03.034: INFO: Pod "pod-subpath-test-downwardapi-z96j" satisfied condition "success or failure" Jul 15 13:22:03.038: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-z96j container test-container-subpath-downwardapi-z96j: STEP: delete the pod Jul 15 13:22:03.067: INFO: Waiting for pod pod-subpath-test-downwardapi-z96j to disappear Jul 15 13:22:03.071: INFO: Pod pod-subpath-test-downwardapi-z96j no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-z96j Jul 15 13:22:03.071: INFO: Deleting pod "pod-subpath-test-downwardapi-z96j" in namespace "subpath-5744" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:22:03.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5744" for this suite. Jul 15 13:22:09.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:22:09.195: INFO: namespace subpath-5744 deletion completed in 6.095803262s • [SLOW TEST:30.323 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:22:09.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 
'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:22:40.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8981" for this suite. Jul 15 13:22:46.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:22:46.373: INFO: namespace container-runtime-8981 deletion completed in 6.109923649s • [SLOW TEST:37.176 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:22:46.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jul 15 13:22:50.986: INFO: Successfully updated pod "annotationupdatebbbfac44-5db0-4aab-aa66-c2d668604239" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:22:55.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5144" for this suite. 
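------------------------------
The Container Runtime block above walks one container per restart policy (the rpa/rpof/rpn suffixes appear to abbreviate RestartPolicy Always/OnFailure/Never) and checks RestartCount, Phase, the Ready condition, and State after each exit. The same status fields can be inspected by hand; the pod name and jsonpath targets here are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 1"]   # non-zero exit, so OnFailure keeps restarting it
EOF
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].state}{"\n"}'
------------------------------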
Jul 15 13:23:17.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:23:17.151: INFO: namespace downward-api-5144 deletion completed in 22.137828732s • [SLOW TEST:30.778 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:23:17.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 15 13:23:17.230: INFO: Waiting up to 5m0s for pod "downwardapi-volume-723eb065-490d-4bce-87de-68f78501d687" in namespace "downward-api-597" to be "success or failure" Jul 15 13:23:17.234: INFO: Pod "downwardapi-volume-723eb065-490d-4bce-87de-68f78501d687": Phase="Pending", Reason="", readiness=false. Elapsed: 3.52722ms Jul 15 13:23:19.238: INFO: Pod "downwardapi-volume-723eb065-490d-4bce-87de-68f78501d687": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007633451s Jul 15 13:23:21.242: INFO: Pod "downwardapi-volume-723eb065-490d-4bce-87de-68f78501d687": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011748601s STEP: Saw pod success Jul 15 13:23:21.242: INFO: Pod "downwardapi-volume-723eb065-490d-4bce-87de-68f78501d687" satisfied condition "success or failure" Jul 15 13:23:21.247: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-723eb065-490d-4bce-87de-68f78501d687 container client-container: STEP: delete the pod Jul 15 13:23:21.286: INFO: Waiting for pod downwardapi-volume-723eb065-490d-4bce-87de-68f78501d687 to disappear Jul 15 13:23:21.294: INFO: Pod downwardapi-volume-723eb065-490d-4bce-87de-68f78501d687 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:23:21.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-597" for this suite. 
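------------------------------
The annotation-update spec above depends on downward API volumes being resynced by the kubelet when pod metadata changes, so a running container eventually sees new annotations without a restart. A minimal sketch of that wiring; every name here is illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "1"
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
kubectl annotate pod annotation-demo build="2" --overwrite   # the file content catches up after the kubelet sync period
------------------------------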
Jul 15 13:23:27.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:23:27.385: INFO: namespace downward-api-597 deletion completed in 6.087472034s • [SLOW TEST:10.234 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:23:27.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 15 13:23:27.504: INFO: Waiting up to 5m0s for pod "pod-bb3ea21a-c44b-4020-aaac-d0a07cf78ed8" in namespace "emptydir-8931" to be "success or failure" Jul 15 13:23:27.518: INFO: Pod "pod-bb3ea21a-c44b-4020-aaac-d0a07cf78ed8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.453127ms Jul 15 13:23:29.522: INFO: Pod "pod-bb3ea21a-c44b-4020-aaac-d0a07cf78ed8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017511121s Jul 15 13:23:31.570: INFO: Pod "pod-bb3ea21a-c44b-4020-aaac-d0a07cf78ed8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065625815s STEP: Saw pod success Jul 15 13:23:31.570: INFO: Pod "pod-bb3ea21a-c44b-4020-aaac-d0a07cf78ed8" satisfied condition "success or failure" Jul 15 13:23:31.573: INFO: Trying to get logs from node iruya-worker2 pod pod-bb3ea21a-c44b-4020-aaac-d0a07cf78ed8 container test-container: STEP: delete the pod Jul 15 13:23:31.614: INFO: Waiting for pod pod-bb3ea21a-c44b-4020-aaac-d0a07cf78ed8 to disappear Jul 15 13:23:31.631: INFO: Pod pod-bb3ea21a-c44b-4020-aaac-d0a07cf78ed8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:23:31.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8931" for this suite. 
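------------------------------
The cpu-request spec uses a resourceFieldRef, the downward API selector for container resources rather than pod metadata; the divisor fixes the units written into the file. A sketch with illustrative values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: main
          resource: requests.cpu
          divisor: 1m          # report the request in millicores
EOF
kubectl logs cpu-request-demo   # expect: 250
------------------------------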
Jul 15 13:23:37.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:23:37.742: INFO: namespace emptydir-8931 deletion completed in 6.107779357s • [SLOW TEST:10.357 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:23:37.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 15 13:23:37.820: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:23:41.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5246" for this suite. 
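------------------------------
The (non-root,0777,default) variant differs from its siblings only in who writes and where the volume lives: runAsUser makes the container run as a non-root UID, and omitting `medium` leaves the emptyDir on node-local disk instead of tmpfs. A sketch, assuming (as these variants rely on) that the emptyDir directory is writable by the pod's UID:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # illustrative non-root UID
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "touch /mnt/vol/ok && echo writable"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/vol
  volumes:
  - name: vol
    emptyDir: {}               # default medium: node disk
EOF
kubectl logs emptydir-nonroot-demo   # expect: writable
------------------------------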
Jul 15 13:24:31.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:24:32.038: INFO: namespace pods-5246 deletion completed in 50.135824707s • [SLOW TEST:54.295 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:24:32.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0715 13:25:02.682920 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 15 13:25:02.682: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:25:02.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9192" for this suite. 
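------------------------------
The websockets spec reads from the pod log subresource, the same endpoint kubectl consumes, but negotiated as a websocket stream rather than a plain HTTP response. From the command line the equivalent check is ordinary log retrieval; names here are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ws-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo container is alive"]
EOF
kubectl logs ws-logs-demo      # one-shot read of the log subresource
kubectl logs -f ws-logs-demo   # streaming read, the behaviour the websocket path provides
------------------------------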
Jul 15 13:25:10.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:25:10.755: INFO: namespace gc-9192 deletion completed in 8.070238825s • [SLOW TEST:38.717 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:25:10.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 15 13:25:10.834: INFO: Waiting up to 5m0s for pod "downwardapi-volume-77f2769c-6d35-4eca-bc0e-4f06c6152217" in namespace "projected-2155" to be "success or failure" Jul 15 13:25:10.864: INFO: Pod "downwardapi-volume-77f2769c-6d35-4eca-bc0e-4f06c6152217": Phase="Pending", Reason="", readiness=false. Elapsed: 29.999557ms Jul 15 13:25:13.946: INFO: Pod "downwardapi-volume-77f2769c-6d35-4eca-bc0e-4f06c6152217": Phase="Pending", Reason="", readiness=false. Elapsed: 3.112106905s Jul 15 13:25:15.950: INFO: Pod "downwardapi-volume-77f2769c-6d35-4eca-bc0e-4f06c6152217": Phase="Pending", Reason="", readiness=false. Elapsed: 5.11657666s Jul 15 13:25:17.954: INFO: Pod "downwardapi-volume-77f2769c-6d35-4eca-bc0e-4f06c6152217": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.120408502s STEP: Saw pod success Jul 15 13:25:17.954: INFO: Pod "downwardapi-volume-77f2769c-6d35-4eca-bc0e-4f06c6152217" satisfied condition "success or failure" Jul 15 13:25:17.957: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-77f2769c-6d35-4eca-bc0e-4f06c6152217 container client-container: STEP: delete the pod Jul 15 13:25:18.009: INFO: Waiting for pod downwardapi-volume-77f2769c-6d35-4eca-bc0e-4f06c6152217 to disappear Jul 15 13:25:18.011: INFO: Pod downwardapi-volume-77f2769c-6d35-4eca-bc0e-4f06c6152217 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:25:18.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2155" for this suite. 
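------------------------------
The garbage-collector spec deletes a Deployment with deleteOptions.PropagationPolicy set to Orphan, then waits 30 seconds to confirm the ReplicaSet is not collected. With kubectl the same policy is requested through the cascade flag (--cascade=false on clients of this vintage; newer releases spell it --cascade=orphan). A sketch:

kubectl create deployment orphan-demo --image=nginx
kubectl get rs -l app=orphan-demo            # the ReplicaSet the deployment created
kubectl delete deployment orphan-demo --cascade=false
kubectl get rs -l app=orphan-demo            # still listed: the RS was orphaned, not deleted
------------------------------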
Jul 15 13:25:24.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:25:24.116: INFO: namespace projected-2155 deletion completed in 6.101535186s • [SLOW TEST:13.360 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:25:24.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 15 13:25:24.249: INFO: Waiting up to 5m0s for pod "pod-03004ea1-ed2e-4ab7-8656-3185c6074c7b" in namespace "emptydir-4052" to be "success or failure" Jul 15 13:25:24.476: INFO: Pod "pod-03004ea1-ed2e-4ab7-8656-3185c6074c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 227.174014ms Jul 15 13:25:26.480: INFO: Pod "pod-03004ea1-ed2e-4ab7-8656-3185c6074c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230944862s Jul 15 13:25:28.484: INFO: Pod "pod-03004ea1-ed2e-4ab7-8656-3185c6074c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234874411s Jul 15 13:25:32.206: INFO: Pod "pod-03004ea1-ed2e-4ab7-8656-3185c6074c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.957591422s Jul 15 13:25:34.210: INFO: Pod "pod-03004ea1-ed2e-4ab7-8656-3185c6074c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.961033156s Jul 15 13:25:36.214: INFO: Pod "pod-03004ea1-ed2e-4ab7-8656-3185c6074c7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.965416525s STEP: Saw pod success Jul 15 13:25:36.214: INFO: Pod "pod-03004ea1-ed2e-4ab7-8656-3185c6074c7b" satisfied condition "success or failure" Jul 15 13:25:36.217: INFO: Trying to get logs from node iruya-worker pod pod-03004ea1-ed2e-4ab7-8656-3185c6074c7b container test-container: STEP: delete the pod Jul 15 13:25:37.124: INFO: Waiting for pod pod-03004ea1-ed2e-4ab7-8656-3185c6074c7b to disappear Jul 15 13:25:37.152: INFO: Pod pod-03004ea1-ed2e-4ab7-8656-3185c6074c7b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:25:37.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4052" for this suite. 
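------------------------------
"Set mode on item file" covers per-item permissions: each projected downwardAPI item may carry its own mode, overriding the volume-wide defaultMode. A sketch, with 0400 chosen purely for illustration:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: item-mode-demo
  labels:
    app: demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "stat -Lc '%a' /etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            mode: 0400         # per-item mode, overrides any defaultMode
            fieldRef:
              fieldPath: metadata.labels
EOF
kubectl logs item-mode-demo    # expect: 400
------------------------------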
Jul 15 13:25:43.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:25:43.255: INFO: namespace emptydir-4052 deletion completed in 6.099829132s • [SLOW TEST:19.138 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:25:43.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-bae8c7fe-371f-48bb-8c5b-11acfddd6d4a STEP: Creating a pod to test consume secrets Jul 15 13:25:43.363: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8af3103d-ddac-4cc6-844b-dbd2e984f0b9" in namespace "projected-7652" to be "success or failure" Jul 15 13:25:43.368: INFO: Pod "pod-projected-secrets-8af3103d-ddac-4cc6-844b-dbd2e984f0b9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.095864ms Jul 15 13:25:45.493: INFO: Pod "pod-projected-secrets-8af3103d-ddac-4cc6-844b-dbd2e984f0b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130560206s Jul 15 13:25:47.497: INFO: Pod "pod-projected-secrets-8af3103d-ddac-4cc6-844b-dbd2e984f0b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.134618954s STEP: Saw pod success Jul 15 13:25:47.497: INFO: Pod "pod-projected-secrets-8af3103d-ddac-4cc6-844b-dbd2e984f0b9" satisfied condition "success or failure" Jul 15 13:25:47.500: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-8af3103d-ddac-4cc6-844b-dbd2e984f0b9 container projected-secret-volume-test: STEP: delete the pod Jul 15 13:25:47.543: INFO: Waiting for pod pod-projected-secrets-8af3103d-ddac-4cc6-844b-dbd2e984f0b9 to disappear Jul 15 13:25:47.548: INFO: Pod pod-projected-secrets-8af3103d-ddac-4cc6-844b-dbd2e984f0b9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:25:47.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7652" for this suite. 
Jul 15 13:25:53.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:25:53.647: INFO: namespace projected-7652 deletion completed in 6.096393023s • [SLOW TEST:10.392 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:25:53.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jul 15 13:26:00.250: INFO: Successfully updated pod "labelsupdatea9c7c048-f5ff-4d93-ac92-8d6010a7be5d" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:26:02.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1220" for this suite. 
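------------------------------
The defaultMode/fsGroup combination above checks ownership as well as permissions: fsGroup has the kubelet assign the projected files to that GID, while defaultMode fixes their mode bits for the non-root reader. A sketch; the secret name, mode, and IDs are all illustrative:

kubectl create secret generic demo-secret --from-literal=password=s3cr3t
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 2000              # projected files become group-owned by GID 2000
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "stat -Lc '%a %u %g' /etc/secret/password"]
    volumeMounts:
    - name: secret
      mountPath: /etc/secret
  volumes:
  - name: secret
    projected:
      defaultMode: 0440        # r--r----- on every projected file
      sources:
      - secret:
          name: demo-secret
EOF
kubectl logs secret-mode-demo   # expect mode 440 with group 2000
------------------------------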
Jul 15 13:26:24.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:26:24.392: INFO: namespace projected-1220 deletion completed in 22.09039566s • [SLOW TEST:30.745 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:26:24.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-9c47d838-933c-4909-87f8-9e7103cb0198 STEP: Creating a pod to test consume secrets Jul 15 13:26:24.496: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7e65a081-6f66-480f-923e-b145704a5079" in namespace "projected-4584" to be "success or failure" Jul 15 13:26:24.501: INFO: Pod "pod-projected-secrets-7e65a081-6f66-480f-923e-b145704a5079": Phase="Pending", Reason="", readiness=false. Elapsed: 5.420759ms Jul 15 13:26:26.505: INFO: Pod "pod-projected-secrets-7e65a081-6f66-480f-923e-b145704a5079": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008899729s Jul 15 13:26:28.511: INFO: Pod "pod-projected-secrets-7e65a081-6f66-480f-923e-b145704a5079": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015140433s STEP: Saw pod success Jul 15 13:26:28.511: INFO: Pod "pod-projected-secrets-7e65a081-6f66-480f-923e-b145704a5079" satisfied condition "success or failure" Jul 15 13:26:28.513: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-7e65a081-6f66-480f-923e-b145704a5079 container secret-volume-test: STEP: delete the pod Jul 15 13:26:28.547: INFO: Waiting for pod pod-projected-secrets-7e65a081-6f66-480f-923e-b145704a5079 to disappear Jul 15 13:26:28.561: INFO: Pod pod-projected-secrets-7e65a081-6f66-480f-923e-b145704a5079 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:26:28.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4584" for this suite. 
Jul 15 13:26:34.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:26:34.694: INFO: namespace projected-4584 deletion completed in 6.130738121s • [SLOW TEST:10.301 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:26:34.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Jul 15 13:26:34.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6426' Jul 15 13:26:37.818: INFO: stderr: "" Jul 15 13:26:37.818: INFO: stdout: "pod/pause created\n" Jul 15 13:26:37.818: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jul 15 13:26:37.818: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6426" to be "running and ready" Jul 15 13:26:37.840: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 21.664144ms Jul 15 13:26:39.844: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025465796s Jul 15 13:26:41.848: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.029339499s Jul 15 13:26:41.848: INFO: Pod "pause" satisfied condition "running and ready" Jul 15 13:26:41.848: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Jul 15 13:26:41.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6426' Jul 15 13:26:41.945: INFO: stderr: "" Jul 15 13:26:41.945: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jul 15 13:26:41.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6426' Jul 15 13:26:42.044: INFO: stderr: "" Jul 15 13:26:42.044: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Jul 15 13:26:42.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6426' Jul 15 13:26:42.128: INFO: stderr: "" Jul 15 13:26:42.128: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jul 15 13:26:42.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6426' Jul 15 13:26:42.212: INFO: stderr: "" Jul 15 13:26:42.212: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Jul 15 13:26:42.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6426' Jul 15 13:26:42.334: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 15 13:26:42.334: INFO: stdout: "pod \"pause\" force deleted\n" Jul 15 13:26:42.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6426' Jul 15 13:26:42.414: INFO: stderr: "No resources found.\n" Jul 15 13:26:42.414: INFO: stdout: "" Jul 15 13:26:42.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6426 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 15 13:26:42.521: INFO: stderr: "" Jul 15 13:26:42.521: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:26:42.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6426" for this suite. 
Jul 15 13:26:48.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:26:48.925: INFO: namespace kubectl-6426 deletion completed in 6.125526726s • [SLOW TEST:14.231 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:26:48.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jul 15 13:26:49.011: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:26:54.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2358" for this suite. 
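------------------------------
The init-container spec above encodes the ordering guarantee: on a restartPolicy: Never pod a failing init container is not retried, the app containers never start, and the pod goes straight to Failed. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init
    image: busybox
    command: ["sh", "-c", "exit 1"]   # fails once; Never means no retry
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo should never run"]
EOF
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}{"\n"}'   # expect: Failed
------------------------------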
Jul 15 13:27:00.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:27:00.314: INFO: namespace init-container-2358 deletion completed in 6.101585924s • [SLOW TEST:11.389 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:27:00.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jul 15 13:27:00.494: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:27:00.496: INFO: Number of nodes with available pods: 0 Jul 15 13:27:00.496: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:27:01.501: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:27:01.504: INFO: Number of nodes with available pods: 0 Jul 15 13:27:01.504: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:27:02.561: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:27:02.564: INFO: Number of nodes with available pods: 0 Jul 15 13:27:02.564: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:27:03.501: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:27:03.505: INFO: Number of nodes with available pods: 0 Jul 15 13:27:03.505: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:27:04.501: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:27:04.504: INFO: Number of nodes with available pods: 0 Jul 15 13:27:04.504: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:27:05.501: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:27:05.505: INFO: 
Number of nodes with available pods: 2 Jul 15 13:27:05.505: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jul 15 13:27:05.524: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:27:05.529: INFO: Number of nodes with available pods: 2 Jul 15 13:27:05.529: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9168, will wait for the garbage collector to delete the pods Jul 15 13:27:06.773: INFO: Deleting DaemonSet.extensions daemon-set took: 7.168766ms Jul 15 13:27:07.073: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.283916ms Jul 15 13:27:16.876: INFO: Number of nodes with available pods: 0 Jul 15 13:27:16.876: INFO: Number of running nodes: 0, number of available pods: 0 Jul 15 13:27:16.879: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9168/daemonsets","resourceVersion":"1025377"},"items":null} Jul 15 13:27:16.881: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9168/pods","resourceVersion":"1025377"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:27:16.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9168" for this suite. 
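------------------------------
Two DaemonSet behaviours show up in the log above: tainted nodes are skipped unless the pod spec tolerates the taint (hence the repeated "can't tolerate node iruya-control-plane" lines), and a daemon pod forced into Failed is deleted and replaced by the controller. A minimal DaemonSet sketch, with an era-appropriate pause image:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-demo
spec:
  selector:
    matchLabels:
      app: daemon-demo
  template:
    metadata:
      labels:
        app: daemon-demo
    spec:
      # No toleration for node-role.kubernetes.io/master, so control-plane
      # nodes are skipped, exactly as reported for iruya-control-plane.
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
kubectl rollout status ds/daemon-demo
kubectl get pods -l app=daemon-demo -o wide   # one pod per schedulable node
------------------------------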
Jul 15 13:27:22.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:27:22.981: INFO: namespace daemonsets-9168 deletion completed in 6.0896081s • [SLOW TEST:22.667 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:27:22.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 15 13:27:23.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6820' Jul 15 13:27:23.335: INFO: stderr: "" Jul 15 13:27:23.335: INFO: stdout: "replicationcontroller/redis-master created\n" Jul 15 13:27:23.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6820' Jul 15 13:27:23.680: INFO: stderr: "" Jul 15 13:27:23.680: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jul 15 13:27:24.686: INFO: Selector matched 1 pods for map[app:redis] Jul 15 13:27:24.686: INFO: Found 0 / 1 Jul 15 13:27:25.684: INFO: Selector matched 1 pods for map[app:redis] Jul 15 13:27:25.684: INFO: Found 0 / 1 Jul 15 13:27:26.684: INFO: Selector matched 1 pods for map[app:redis] Jul 15 13:27:26.684: INFO: Found 0 / 1 Jul 15 13:27:27.685: INFO: Selector matched 1 pods for map[app:redis] Jul 15 13:27:27.685: INFO: Found 1 / 1 Jul 15 13:27:27.685: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 15 13:27:27.688: INFO: Selector matched 1 pods for map[app:redis] Jul 15 13:27:27.688: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jul 15 13:27:27.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-vh67g --namespace=kubectl-6820' Jul 15 13:27:27.797: INFO: stderr: "" Jul 15 13:27:27.797: INFO: stdout: "Name: redis-master-vh67g\nNamespace: kubectl-6820\nPriority: 0\nNode: iruya-worker2/172.18.0.5\nStart Time: Wed, 15 Jul 2020 13:27:23 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.157\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://09cb29405f5c78681a908550bebc69da863079349c0b35f868f4d06b75bf0556\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 15 Jul 2020 13:27:26 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-d8zss (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-d8zss:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-d8zss\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-6820/redis-master-vh67g to iruya-worker2\n Normal Pulled 3s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker2 Created container redis-master\n Normal Started 1s kubelet, iruya-worker2 Started container redis-master\n" Jul 15 13:27:27.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-6820' Jul 15 13:27:27.931: INFO: stderr: "" Jul 15 13:27:27.931: INFO: stdout: "Name: redis-master\nNamespace: kubectl-6820\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-vh67g\n" Jul 15 13:27:27.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-6820' Jul 15 13:27:28.025: INFO: stderr: "" Jul 15 13:27:28.025: INFO: stdout: "Name: redis-master\nNamespace: kubectl-6820\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.117.54\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.157:6379\nSession Affinity: None\nEvents: \n" Jul 15 13:27:28.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Jul 15 13:27:28.147: INFO: stderr: "" Jul 15 13:27:28.148: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 10 Jul 2020 10:24:15 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 15 Jul 2020 13:27:03 +0000 Fri, 10 Jul 2020 10:24:13 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 15 Jul 2020 13:27:03 +0000 Fri, 10 Jul 2020 10:24:13 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 15 Jul 2020 13:27:03 +0000 Fri, 10 Jul 2020 10:24:13 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 15 Jul 2020 13:27:03 +0000 Fri, 10 Jul 2020 10:24:55 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: ddfe0bc7a0894b3cb09cc705d7a30756\n System UUID: 21288804-52c5-446b-b3b1-ea01a8604f98\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.15.11\n Kube-Proxy Version: v1.15.11\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-5d4dd4b4db-m2fxl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 5d3h\n kube-system coredns-5d4dd4b4db-vp5mq 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 5d3h\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d3h\n kube-system kindnet-9ltkw 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 5d3h\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 5d3h\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 5d3h\n kube-system kube-proxy-fj88n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d3h\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 5d3h\n local-path-storage local-path-provisioner-668779bd7-7vrm2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d3h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jul 15 13:27:28.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6820' Jul 15 13:27:28.252: INFO: stderr: "" Jul 15 13:27:28.252: INFO: stdout: "Name: kubectl-6820\nLabels: e2e-framework=kubectl\n e2e-run=ef314d12-e6fb-4f53-a950-ab1d6803a998\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:27:28.252: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6820" for this suite. Jul 15 13:27:50.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:27:50.343: INFO: namespace kubectl-6820 deletion completed in 22.087910208s • [SLOW TEST:27.362 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:27:50.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 15 13:27:50.413: INFO: Waiting up to 5m0s for pod "downwardapi-volume-77b64956-d19d-424f-9b84-f3c87b52b819" in namespace "projected-2621" to be "success or failure" Jul 15 13:27:50.452: INFO: Pod "downwardapi-volume-77b64956-d19d-424f-9b84-f3c87b52b819": Phase="Pending", Reason="", readiness=false. Elapsed: 38.9594ms Jul 15 13:27:52.456: INFO: Pod "downwardapi-volume-77b64956-d19d-424f-9b84-f3c87b52b819": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043305019s Jul 15 13:27:54.460: INFO: Pod "downwardapi-volume-77b64956-d19d-424f-9b84-f3c87b52b819": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047547031s STEP: Saw pod success Jul 15 13:27:54.460: INFO: Pod "downwardapi-volume-77b64956-d19d-424f-9b84-f3c87b52b819" satisfied condition "success or failure" Jul 15 13:27:54.463: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-77b64956-d19d-424f-9b84-f3c87b52b819 container client-container: STEP: delete the pod Jul 15 13:27:54.486: INFO: Waiting for pod downwardapi-volume-77b64956-d19d-424f-9b84-f3c87b52b819 to disappear Jul 15 13:27:54.490: INFO: Pod downwardapi-volume-77b64956-d19d-424f-9b84-f3c87b52b819 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:27:54.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2621" for this suite. 
Jul 15 13:28:00.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:28:00.619: INFO: namespace projected-2621 deletion completed in 6.126254636s • [SLOW TEST:10.276 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:28:00.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-584 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-584 STEP: Deleting pre-stop pod Jul 15 13:28:13.714: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:28:13.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-584" for this suite. 
Jul 15 13:28:51.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:28:51.843: INFO: namespace prestop-584 deletion completed in 38.118465587s • [SLOW TEST:51.223 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:28:51.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-bzrp STEP: Creating a pod to test atomic-volume-subpath Jul 15 13:28:51.931: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bzrp" in namespace "subpath-6307" to be "success or failure" Jul 15 13:28:51.934: INFO: Pod "pod-subpath-test-configmap-bzrp": Phase="Pending", Reason="", readiness=false. Elapsed: 3.815693ms Jul 15 13:28:53.968: INFO: Pod "pod-subpath-test-configmap-bzrp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03717806s Jul 15 13:28:55.972: INFO: Pod "pod-subpath-test-configmap-bzrp": Phase="Running", Reason="", readiness=true. Elapsed: 4.041490129s Jul 15 13:28:57.976: INFO: Pod "pod-subpath-test-configmap-bzrp": Phase="Running", Reason="", readiness=true. Elapsed: 6.045409757s Jul 15 13:28:59.980: INFO: Pod "pod-subpath-test-configmap-bzrp": Phase="Running", Reason="", readiness=true. Elapsed: 8.049474333s Jul 15 13:29:01.998: INFO: Pod "pod-subpath-test-configmap-bzrp": Phase="Running", Reason="", readiness=true. Elapsed: 10.067079043s Jul 15 13:29:04.001: INFO: Pod "pod-subpath-test-configmap-bzrp": Phase="Running", Reason="", readiness=true. Elapsed: 12.070897906s Jul 15 13:29:06.005: INFO: Pod "pod-subpath-test-configmap-bzrp": Phase="Running", Reason="", readiness=true. Elapsed: 14.074651102s Jul 15 13:29:08.009: INFO: Pod "pod-subpath-test-configmap-bzrp": Phase="Running", Reason="", readiness=true. Elapsed: 16.078187859s Jul 15 13:29:10.012: INFO: Pod "pod-subpath-test-configmap-bzrp": Phase="Running", Reason="", readiness=true. Elapsed: 18.081735591s Jul 15 13:29:12.017: INFO: Pod "pod-subpath-test-configmap-bzrp": Phase="Running", Reason="", readiness=true. Elapsed: 20.086713188s Jul 15 13:29:14.021: INFO: Pod "pod-subpath-test-configmap-bzrp": Phase="Running", Reason="", readiness=true. Elapsed: 22.090484305s Jul 15 13:29:16.025: INFO: Pod "pod-subpath-test-configmap-bzrp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.09474s STEP: Saw pod success Jul 15 13:29:16.025: INFO: Pod "pod-subpath-test-configmap-bzrp" satisfied condition "success or failure" Jul 15 13:29:16.028: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-bzrp container test-container-subpath-configmap-bzrp: STEP: delete the pod Jul 15 13:29:16.048: INFO: Waiting for pod pod-subpath-test-configmap-bzrp to disappear Jul 15 13:29:16.059: INFO: Pod pod-subpath-test-configmap-bzrp no longer exists STEP: Deleting pod pod-subpath-test-configmap-bzrp Jul 15 13:29:16.059: INFO: Deleting pod "pod-subpath-test-configmap-bzrp" in namespace "subpath-6307" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:29:16.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6307" for this suite. Jul 15 13:29:22.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:29:22.193: INFO: namespace subpath-6307 deletion completed in 6.128860209s • [SLOW TEST:30.350 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:29:22.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-0a11ca09-f612-4420-a859-643ad9e560bb STEP: Creating a pod to test consume configMaps Jul 15 13:29:22.273: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0e5e1f22-80bc-4ae4-a03f-1d5848c92277" in namespace "projected-5285" to be "success or failure" Jul 15 13:29:22.275: INFO: Pod "pod-projected-configmaps-0e5e1f22-80bc-4ae4-a03f-1d5848c92277": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474294ms Jul 15 13:29:24.279: INFO: Pod "pod-projected-configmaps-0e5e1f22-80bc-4ae4-a03f-1d5848c92277": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006377851s Jul 15 13:29:26.283: INFO: Pod "pod-projected-configmaps-0e5e1f22-80bc-4ae4-a03f-1d5848c92277": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010297535s STEP: Saw pod success Jul 15 13:29:26.283: INFO: Pod "pod-projected-configmaps-0e5e1f22-80bc-4ae4-a03f-1d5848c92277" satisfied condition "success or failure" Jul 15 13:29:26.286: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-0e5e1f22-80bc-4ae4-a03f-1d5848c92277 container projected-configmap-volume-test: STEP: delete the pod Jul 15 13:29:26.325: INFO: Waiting for pod pod-projected-configmaps-0e5e1f22-80bc-4ae4-a03f-1d5848c92277 to disappear Jul 15 13:29:26.342: INFO: Pod pod-projected-configmaps-0e5e1f22-80bc-4ae4-a03f-1d5848c92277 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:29:26.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5285" for this suite. Jul 15 13:29:32.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:29:32.462: INFO: namespace projected-5285 deletion completed in 6.117047448s • [SLOW TEST:10.269 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:29:32.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 15 13:29:32.572: INFO: Waiting up to 5m0s for pod "pod-fd518100-b30a-49fd-9572-2e8d63c6064f" in namespace "emptydir-6433" to be "success or failure" Jul 15 13:29:32.600: INFO: Pod "pod-fd518100-b30a-49fd-9572-2e8d63c6064f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.400136ms Jul 15 13:29:34.605: INFO: Pod "pod-fd518100-b30a-49fd-9572-2e8d63c6064f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032706544s Jul 15 13:29:36.609: INFO: Pod "pod-fd518100-b30a-49fd-9572-2e8d63c6064f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036951472s STEP: Saw pod success Jul 15 13:29:36.609: INFO: Pod "pod-fd518100-b30a-49fd-9572-2e8d63c6064f" satisfied condition "success or failure" Jul 15 13:29:36.612: INFO: Trying to get logs from node iruya-worker2 pod pod-fd518100-b30a-49fd-9572-2e8d63c6064f container test-container: STEP: delete the pod Jul 15 13:29:36.631: INFO: Waiting for pod pod-fd518100-b30a-49fd-9572-2e8d63c6064f to disappear Jul 15 13:29:36.636: INFO: Pod pod-fd518100-b30a-49fd-9572-2e8d63c6064f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:29:36.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6433" for this suite. Jul 15 13:29:42.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:29:42.727: INFO: namespace emptydir-6433 deletion completed in 6.088388187s • [SLOW TEST:10.265 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:29:42.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-98m9n in namespace proxy-9799 I0715 13:29:42.859524 6 runners.go:180] Created replication controller with name: proxy-service-98m9n, namespace: proxy-9799, replica count: 1 I0715 13:29:43.909955 6 runners.go:180] proxy-service-98m9n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 13:29:44.910167 6 runners.go:180] proxy-service-98m9n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 13:29:45.910399 6 runners.go:180] proxy-service-98m9n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 13:29:46.910607 6 runners.go:180] proxy-service-98m9n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0715 13:29:47.910843 6 runners.go:180] proxy-service-98m9n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0715 13:29:48.911084 6 runners.go:180] proxy-service-98m9n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0715 13:29:49.911315 6 runners.go:180] proxy-service-98m9n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0715 13:29:50.911547 6 runners.go:180] proxy-service-98m9n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0715 13:29:51.911790 6 runners.go:180] proxy-service-98m9n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0715 13:29:52.912042 6 runners.go:180] proxy-service-98m9n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0715 13:29:53.912315 6 runners.go:180] proxy-service-98m9n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0715 13:29:54.912577 6 runners.go:180] proxy-service-98m9n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0715 13:29:55.912907 6 runners.go:180] proxy-service-98m9n Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 15 13:29:55.916: INFO: setup took 13.116041492s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jul 15 13:29:55.923: INFO: (0) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 7.216573ms) Jul 15 13:29:55.923: INFO: (0) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 7.138458ms) Jul 15 13:29:55.923: INFO: (0) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z/proxy/: test (200; 7.265329ms) Jul 15 13:29:55.923: INFO: (0) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 7.622975ms) Jul 15 13:29:55.923: INFO: (0) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 7.73221ms) Jul 15 13:29:55.924: INFO: (0) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:1080/proxy/: ... (200; 7.858407ms) Jul 15 13:29:55.924: INFO: (0) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:1080/proxy/: test<... (200; 7.687891ms) Jul 15 13:29:55.925: INFO: (0) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname2/proxy/: bar (200; 9.409426ms) Jul 15 13:29:55.928: INFO: (0) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname1/proxy/: foo (200; 12.608486ms) Jul 15 13:29:55.928: INFO: (0) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname1/proxy/: foo (200; 12.693472ms) Jul 15 13:29:55.937: INFO: (0) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 21.573288ms) Jul 15 13:29:55.938: INFO: (0) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname2/proxy/: tls qux (200; 22.006332ms) Jul 15 13:29:55.938: INFO: (0) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname1/proxy/: tls baz (200; 22.287744ms) Jul 15 13:29:55.939: INFO: (0) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: test<... 
(200; 4.23245ms) Jul 15 13:29:55.945: INFO: (1) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 4.187991ms) Jul 15 13:29:55.945: INFO: (1) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: test (200; 4.381535ms) Jul 15 13:29:55.945: INFO: (1) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname2/proxy/: bar (200; 4.322874ms) Jul 15 13:29:55.945: INFO: (1) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname1/proxy/: foo (200; 4.376851ms) Jul 15 13:29:55.945: INFO: (1) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:1080/proxy/: ... (200; 4.401014ms) Jul 15 13:29:55.945: INFO: (1) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname2/proxy/: tls qux (200; 4.384697ms) Jul 15 13:29:55.946: INFO: (1) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 4.960267ms) Jul 15 13:29:55.946: INFO: (1) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 4.836583ms) Jul 15 13:29:55.948: INFO: (2) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 2.257302ms) Jul 15 13:29:55.949: INFO: (2) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:462/proxy/: tls qux (200; 2.668687ms) Jul 15 13:29:55.949: INFO: (2) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:1080/proxy/: ... (200; 3.42063ms) Jul 15 13:29:55.950: INFO: (2) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 4.078847ms) Jul 15 13:29:55.950: INFO: (2) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 4.128156ms) Jul 15 13:29:55.950: INFO: (2) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:460/proxy/: tls baz (200; 4.549194ms) Jul 15 13:29:55.950: INFO: (2) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 4.566649ms) Jul 15 13:29:55.950: INFO: (2) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:1080/proxy/: test<... (200; 4.570572ms) Jul 15 13:29:55.951: INFO: (2) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: test (200; 4.642377ms) Jul 15 13:29:55.951: INFO: (2) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname1/proxy/: foo (200; 4.671455ms) Jul 15 13:29:55.951: INFO: (2) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname2/proxy/: tls qux (200; 4.711277ms) Jul 15 13:29:55.951: INFO: (2) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname2/proxy/: bar (200; 4.709942ms) Jul 15 13:29:55.951: INFO: (2) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 4.9546ms) Jul 15 13:29:55.951: INFO: (2) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname1/proxy/: tls baz (200; 5.133663ms) Jul 15 13:29:55.955: INFO: (3) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname2/proxy/: bar (200; 4.2839ms) Jul 15 13:29:55.956: INFO: (3) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname1/proxy/: foo (200; 4.446457ms) Jul 15 13:29:55.956: INFO: (3) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:1080/proxy/: test<... 
(200; 4.818736ms) Jul 15 13:29:55.956: INFO: (3) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname2/proxy/: tls qux (200; 4.668082ms) Jul 15 13:29:55.956: INFO: (3) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 4.805205ms) Jul 15 13:29:55.956: INFO: (3) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:462/proxy/: tls qux (200; 4.773713ms) Jul 15 13:29:55.956: INFO: (3) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:1080/proxy/: ... (200; 4.84543ms) Jul 15 13:29:55.956: INFO: (3) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: test (200; 4.899039ms) Jul 15 13:29:55.956: INFO: (3) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname1/proxy/: foo (200; 4.902307ms) Jul 15 13:29:55.956: INFO: (3) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 4.965214ms) Jul 15 13:29:55.956: INFO: (3) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname1/proxy/: tls baz (200; 5.021019ms) Jul 15 13:29:55.956: INFO: (3) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:460/proxy/: tls baz (200; 4.932314ms) Jul 15 13:29:55.956: INFO: (3) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 5.006121ms) Jul 15 13:29:55.956: INFO: (3) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 5.020267ms) Jul 15 13:29:55.956: INFO: (3) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 5.013584ms) Jul 15 13:29:55.960: INFO: (4) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 3.255863ms) Jul 15 13:29:55.960: INFO: (4) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:1080/proxy/: ... (200; 3.751067ms) Jul 15 13:29:55.960: INFO: (4) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z/proxy/: test (200; 3.814129ms) Jul 15 13:29:55.960: INFO: (4) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 3.722866ms) Jul 15 13:29:55.960: INFO: (4) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:460/proxy/: tls baz (200; 3.858587ms) Jul 15 13:29:55.960: INFO: (4) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 3.711707ms) Jul 15 13:29:55.960: INFO: (4) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:462/proxy/: tls qux (200; 3.860406ms) Jul 15 13:29:55.960: INFO: (4) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:1080/proxy/: test<... (200; 3.843961ms) Jul 15 13:29:55.960: INFO: (4) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: test (200; 2.656296ms) Jul 15 13:29:55.964: INFO: (5) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 2.947754ms) Jul 15 13:29:55.964: INFO: (5) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 3.067667ms) Jul 15 13:29:55.967: INFO: (5) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:1080/proxy/: ... 
(200; 5.461669ms) Jul 15 13:29:55.967: INFO: (5) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 5.525903ms) Jul 15 13:29:55.967: INFO: (5) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:462/proxy/: tls qux (200; 6.183283ms) Jul 15 13:29:55.967: INFO: (5) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 6.388622ms) Jul 15 13:29:55.967: INFO: (5) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname2/proxy/: tls qux (200; 6.2912ms) Jul 15 13:29:55.968: INFO: (5) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname2/proxy/: bar (200; 7.052718ms) Jul 15 13:29:55.968: INFO: (5) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 6.965816ms) Jul 15 13:29:55.968: INFO: (5) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:460/proxy/: tls baz (200; 6.988995ms) Jul 15 13:29:55.968: INFO: (5) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:1080/proxy/: test<... (200; 7.113063ms) Jul 15 13:29:55.968: INFO: (5) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname1/proxy/: foo (200; 7.216919ms) Jul 15 13:29:55.969: INFO: (5) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: test (200; 3.383952ms) Jul 15 13:29:55.973: INFO: (6) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname2/proxy/: tls qux (200; 3.645971ms) Jul 15 13:29:55.973: INFO: (6) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:462/proxy/: tls qux (200; 3.87269ms) Jul 15 13:29:55.973: INFO: (6) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: ... (200; 4.460458ms) Jul 15 13:29:55.974: INFO: (6) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 4.930406ms) Jul 15 13:29:55.975: INFO: (6) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:1080/proxy/: test<... (200; 5.578945ms) Jul 15 13:29:55.975: INFO: (6) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname1/proxy/: foo (200; 5.573155ms) Jul 15 13:29:55.975: INFO: (6) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname1/proxy/: tls baz (200; 5.649864ms) Jul 15 13:29:55.975: INFO: (6) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname2/proxy/: bar (200; 5.686206ms) Jul 15 13:29:55.975: INFO: (6) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 5.708913ms) Jul 15 13:29:55.975: INFO: (6) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname1/proxy/: foo (200; 5.759575ms) Jul 15 13:29:55.978: INFO: (7) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: test<... 
(200; 3.142ms) Jul 15 13:29:55.978: INFO: (7) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 3.186476ms) Jul 15 13:29:55.978: INFO: (7) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z/proxy/: test (200; 3.209404ms) Jul 15 13:29:55.978: INFO: (7) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 3.237122ms) Jul 15 13:29:55.978: INFO: (7) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 3.339359ms) Jul 15 13:29:55.978: INFO: (7) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:462/proxy/: tls qux (200; 3.572604ms) Jul 15 13:29:55.978: INFO: (7) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:1080/proxy/: ... (200; 3.482132ms) Jul 15 13:29:55.979: INFO: (7) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname1/proxy/: foo (200; 3.979329ms) Jul 15 13:29:55.979: INFO: (7) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname2/proxy/: bar (200; 3.956574ms) Jul 15 13:29:55.979: INFO: (7) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname1/proxy/: tls baz (200; 4.010891ms) Jul 15 13:29:55.979: INFO: (7) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 4.045852ms) Jul 15 13:29:55.979: INFO: (7) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname1/proxy/: foo (200; 4.10759ms) Jul 15 13:29:55.979: INFO: (7) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname2/proxy/: tls qux (200; 4.053831ms) Jul 15 13:29:55.982: INFO: (8) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 2.792892ms) Jul 15 13:29:55.983: INFO: (8) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 3.691794ms) Jul 15 13:29:55.983: INFO: (8) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 3.741435ms) Jul 15 13:29:55.983: INFO: (8) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:1080/proxy/: ... (200; 3.843493ms) Jul 15 13:29:55.983: INFO: (8) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:460/proxy/: tls baz (200; 3.895296ms) Jul 15 13:29:55.983: INFO: (8) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z/proxy/: test (200; 3.938706ms) Jul 15 13:29:55.983: INFO: (8) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname2/proxy/: tls qux (200; 3.959307ms) Jul 15 13:29:55.983: INFO: (8) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname2/proxy/: bar (200; 3.947177ms) Jul 15 13:29:55.983: INFO: (8) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname1/proxy/: foo (200; 4.038845ms) Jul 15 13:29:55.983: INFO: (8) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname1/proxy/: foo (200; 4.087621ms) Jul 15 13:29:55.983: INFO: (8) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 4.033322ms) Jul 15 13:29:55.984: INFO: (8) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:1080/proxy/: test<... (200; 4.51544ms) Jul 15 13:29:55.984: INFO: (8) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 4.496989ms) Jul 15 13:29:55.984: INFO: (8) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: test<... 
(200; 2.857761ms) Jul 15 13:29:55.987: INFO: (9) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z/proxy/: test (200; 3.090194ms) Jul 15 13:29:55.987: INFO: (9) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname2/proxy/: bar (200; 3.171769ms) Jul 15 13:29:55.987: INFO: (9) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 3.191957ms) Jul 15 13:29:55.987: INFO: (9) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 3.049356ms) Jul 15 13:29:55.987: INFO: (9) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 3.219503ms) Jul 15 13:29:55.987: INFO: (9) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:462/proxy/: tls qux (200; 3.094733ms) Jul 15 13:29:55.987: INFO: (9) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:1080/proxy/: ... (200; 3.131021ms) Jul 15 13:29:55.988: INFO: (9) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname1/proxy/: foo (200; 4.233916ms) Jul 15 13:29:55.988: INFO: (9) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname1/proxy/: foo (200; 4.225689ms) Jul 15 13:29:55.988: INFO: (9) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 4.227092ms) Jul 15 13:29:55.988: INFO: (9) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname1/proxy/: tls baz (200; 4.28843ms) Jul 15 13:29:55.988: INFO: (9) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname2/proxy/: tls qux (200; 4.319063ms) Jul 15 13:29:55.991: INFO: (10) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:462/proxy/: tls qux (200; 2.21909ms) Jul 15 13:29:55.991: INFO: (10) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 2.602888ms) Jul 15 13:29:55.991: INFO: (10) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: test (200; 2.978496ms) Jul 15 13:29:55.992: INFO: (10) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:460/proxy/: tls baz (200; 3.310769ms) Jul 15 13:29:55.992: INFO: (10) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname1/proxy/: foo (200; 3.21061ms) Jul 15 13:29:55.992: INFO: (10) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:1080/proxy/: ... (200; 3.228301ms) Jul 15 13:29:55.992: INFO: (10) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 3.345461ms) Jul 15 13:29:55.992: INFO: (10) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 3.307799ms) Jul 15 13:29:55.992: INFO: (10) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname2/proxy/: bar (200; 3.503015ms) Jul 15 13:29:55.992: INFO: (10) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 3.624743ms) Jul 15 13:29:55.992: INFO: (10) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:1080/proxy/: test<... 
(200; 3.748694ms) Jul 15 13:29:55.992: INFO: (10) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname1/proxy/: foo (200; 3.685885ms) Jul 15 13:29:55.992: INFO: (10) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname2/proxy/: tls qux (200; 3.793532ms) Jul 15 13:29:55.992: INFO: (10) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname1/proxy/: tls baz (200; 3.894523ms) Jul 15 13:29:55.995: INFO: (11) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 2.90647ms) Jul 15 13:29:55.996: INFO: (11) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:1080/proxy/: test<... (200; 3.679477ms) Jul 15 13:29:55.997: INFO: (11) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:462/proxy/: tls qux (200; 4.129461ms) Jul 15 13:29:55.997: INFO: (11) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 4.359389ms) Jul 15 13:29:55.997: INFO: (11) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname2/proxy/: tls qux (200; 4.425169ms) Jul 15 13:29:55.997: INFO: (11) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname1/proxy/: foo (200; 4.469401ms) Jul 15 13:29:55.997: INFO: (11) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z/proxy/: test (200; 4.463689ms) Jul 15 13:29:55.997: INFO: (11) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 4.440117ms) Jul 15 13:29:55.997: INFO: (11) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname1/proxy/: foo (200; 4.401713ms) Jul 15 13:29:55.997: INFO: (11) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname2/proxy/: bar (200; 4.494563ms) Jul 15 13:29:55.997: INFO: (11) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname1/proxy/: tls baz (200; 4.492602ms) Jul 15 13:29:55.997: INFO: (11) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 4.493683ms) Jul 15 13:29:55.997: INFO: (11) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:1080/proxy/: ... (200; 4.529658ms) Jul 15 13:29:55.997: INFO: (11) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:460/proxy/: tls baz (200; 4.497609ms) Jul 15 13:29:55.997: INFO: (11) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: test (200; 3.171937ms) Jul 15 13:29:56.000: INFO: (12) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:462/proxy/: tls qux (200; 3.124727ms) Jul 15 13:29:56.000: INFO: (12) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:460/proxy/: tls baz (200; 3.10156ms) Jul 15 13:29:56.000: INFO: (12) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 3.433721ms) Jul 15 13:29:56.001: INFO: (12) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:1080/proxy/: test<... 
(200; 3.477833ms) Jul 15 13:29:56.001: INFO: (12) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 3.475437ms) Jul 15 13:29:56.001: INFO: (12) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 3.914896ms) Jul 15 13:29:56.001: INFO: (12) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 4.053728ms) Jul 15 13:29:56.001: INFO: (12) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname2/proxy/: bar (200; 4.111958ms) Jul 15 13:29:56.001: INFO: (12) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname1/proxy/: foo (200; 4.184847ms) Jul 15 13:29:56.001: INFO: (12) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname1/proxy/: tls baz (200; 4.332521ms) Jul 15 13:29:56.001: INFO: (12) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:1080/proxy/: ... (200; 4.333382ms) Jul 15 13:29:56.001: INFO: (12) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname2/proxy/: tls qux (200; 4.370433ms) Jul 15 13:29:56.002: INFO: (12) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname1/proxy/: foo (200; 4.419923ms) Jul 15 13:29:56.004: INFO: (13) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: test<... (200; 2.887708ms) Jul 15 13:29:56.005: INFO: (13) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 3.220778ms) Jul 15 13:29:56.005: INFO: (13) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:462/proxy/: tls qux (200; 2.495421ms) Jul 15 13:29:56.005: INFO: (13) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 2.832446ms) Jul 15 13:29:56.005: INFO: (13) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:1080/proxy/: ... (200; 2.698624ms) Jul 15 13:29:56.005: INFO: (13) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 2.624226ms) Jul 15 13:29:56.005: INFO: (13) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z/proxy/: test (200; 3.029012ms) Jul 15 13:29:56.005: INFO: (13) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 3.240754ms) Jul 15 13:29:56.005: INFO: (13) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:460/proxy/: tls baz (200; 3.457596ms) Jul 15 13:29:56.006: INFO: (13) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname2/proxy/: tls qux (200; 4.240796ms) Jul 15 13:29:56.009: INFO: (13) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname2/proxy/: bar (200; 6.647459ms) Jul 15 13:29:56.009: INFO: (13) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname1/proxy/: foo (200; 7.792635ms) Jul 15 13:29:56.009: INFO: (13) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname1/proxy/: foo (200; 6.949626ms) Jul 15 13:29:56.009: INFO: (13) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 7.559585ms) Jul 15 13:29:56.009: INFO: (13) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname1/proxy/: tls baz (200; 6.804631ms) Jul 15 13:29:56.013: INFO: (14) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: test<... 
(200; 3.46793ms) Jul 15 13:29:56.013: INFO: (14) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 3.478668ms) Jul 15 13:29:56.013: INFO: (14) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 3.496696ms) Jul 15 13:29:56.013: INFO: (14) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 3.615217ms) Jul 15 13:29:56.013: INFO: (14) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:1080/proxy/: ... (200; 3.630673ms) Jul 15 13:29:56.013: INFO: (14) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:460/proxy/: tls baz (200; 3.600484ms) Jul 15 13:29:56.013: INFO: (14) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 3.690061ms) Jul 15 13:29:56.014: INFO: (14) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z/proxy/: test (200; 4.498362ms) Jul 15 13:29:56.014: INFO: (14) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 4.758318ms) Jul 15 13:29:56.014: INFO: (14) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname1/proxy/: foo (200; 4.779191ms) Jul 15 13:29:56.014: INFO: (14) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname2/proxy/: tls qux (200; 4.841768ms) Jul 15 13:29:56.014: INFO: (14) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname1/proxy/: foo (200; 4.895904ms) Jul 15 13:29:56.014: INFO: (14) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname2/proxy/: bar (200; 4.843639ms) Jul 15 13:29:56.015: INFO: (14) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname1/proxy/: tls baz (200; 4.909009ms) Jul 15 13:29:56.017: INFO: (15) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:460/proxy/: tls baz (200; 2.455326ms) Jul 15 13:29:56.018: INFO: (15) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:1080/proxy/: test<... (200; 3.302881ms) Jul 15 13:29:56.018: INFO: (15) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z/proxy/: test (200; 3.614336ms) Jul 15 13:29:56.018: INFO: (15) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 3.509853ms) Jul 15 13:29:56.018: INFO: (15) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: ... (200; 4.091199ms) Jul 15 13:29:56.019: INFO: (15) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 3.674955ms) Jul 15 13:29:56.019: INFO: (15) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 4.010329ms) Jul 15 13:29:56.019: INFO: (15) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 4.520826ms) Jul 15 13:29:56.019: INFO: (15) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:462/proxy/: tls qux (200; 3.952857ms) Jul 15 13:29:56.019: INFO: (15) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname1/proxy/: tls baz (200; 4.308906ms) Jul 15 13:29:56.021: INFO: (15) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname1/proxy/: foo (200; 6.543561ms) Jul 15 13:29:56.023: INFO: (16) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: ... 
(200; 4.009193ms) Jul 15 13:29:56.025: INFO: (16) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z/proxy/: test (200; 4.056458ms) Jul 15 13:29:56.025: INFO: (16) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 4.121933ms) Jul 15 13:29:56.025: INFO: (16) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:462/proxy/: tls qux (200; 4.101607ms) Jul 15 13:29:56.025: INFO: (16) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:1080/proxy/: test<... (200; 4.067188ms) Jul 15 13:29:56.025: INFO: (16) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname1/proxy/: foo (200; 4.189751ms) Jul 15 13:29:56.025: INFO: (16) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:460/proxy/: tls baz (200; 4.246799ms) Jul 15 13:29:56.026: INFO: (16) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname1/proxy/: tls baz (200; 4.416398ms) Jul 15 13:29:56.026: INFO: (16) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 4.369763ms) Jul 15 13:29:56.026: INFO: (16) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname2/proxy/: tls qux (200; 4.694537ms) Jul 15 13:29:56.026: INFO: (16) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname2/proxy/: bar (200; 4.683569ms) Jul 15 13:29:56.028: INFO: (17) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 1.777631ms) Jul 15 13:29:56.028: INFO: (17) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:1080/proxy/: ... (200; 2.124937ms) Jul 15 13:29:56.028: INFO: (17) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: test<... (200; 3.74152ms) Jul 15 13:29:56.030: INFO: (17) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z/proxy/: test (200; 3.788443ms) Jul 15 13:29:56.030: INFO: (17) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 3.953069ms) Jul 15 13:29:56.030: INFO: (17) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname2/proxy/: tls qux (200; 3.992645ms) Jul 15 13:29:56.030: INFO: (17) /api/v1/namespaces/proxy-9799/services/https:proxy-service-98m9n:tlsportname1/proxy/: tls baz (200; 4.161062ms) Jul 15 13:29:56.030: INFO: (17) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:460/proxy/: tls baz (200; 4.150939ms) Jul 15 13:29:56.030: INFO: (17) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname1/proxy/: foo (200; 4.279328ms) Jul 15 13:29:56.030: INFO: (17) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:462/proxy/: tls qux (200; 4.287012ms) Jul 15 13:29:56.030: INFO: (17) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 4.239121ms) Jul 15 13:29:56.030: INFO: (17) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 4.338604ms) Jul 15 13:29:56.030: INFO: (17) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname1/proxy/: foo (200; 4.492802ms) Jul 15 13:29:56.031: INFO: (17) /api/v1/namespaces/proxy-9799/services/proxy-service-98m9n:portname2/proxy/: bar (200; 5.428624ms) Jul 15 13:29:56.031: INFO: (17) /api/v1/namespaces/proxy-9799/services/http:proxy-service-98m9n:portname2/proxy/: bar (200; 5.457163ms) Jul 15 13:29:56.035: INFO: (18) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 3.546436ms) Jul 15 13:29:56.035: INFO: (18) 
/api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 3.552117ms) Jul 15 13:29:56.035: INFO: (18) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 3.561497ms) Jul 15 13:29:56.035: INFO: (18) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:1080/proxy/: test<... (200; 3.575789ms) Jul 15 13:29:56.035: INFO: (18) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:460/proxy/: tls baz (200; 3.781214ms) Jul 15 13:29:56.035: INFO: (18) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:462/proxy/: tls qux (200; 3.801856ms) Jul 15 13:29:56.035: INFO: (18) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 3.834953ms) Jul 15 13:29:56.035: INFO: (18) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z/proxy/: test (200; 3.799041ms) Jul 15 13:29:56.035: INFO: (18) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:1080/proxy/: ... (200; 3.858114ms) Jul 15 13:29:56.036: INFO: (18) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: ... (200; 2.07042ms) Jul 15 13:29:56.041: INFO: (19) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:1080/proxy/: test<... (200; 2.981613ms) Jul 15 13:29:56.041: INFO: (19) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 3.401088ms) Jul 15 13:29:56.042: INFO: (19) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z/proxy/: test (200; 4.745529ms) Jul 15 13:29:56.043: INFO: (19) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 4.627081ms) Jul 15 13:29:56.043: INFO: (19) /api/v1/namespaces/proxy-9799/pods/proxy-service-98m9n-tsh7z:162/proxy/: bar (200; 4.792294ms) Jul 15 13:29:56.043: INFO: (19) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:462/proxy/: tls qux (200; 4.898032ms) Jul 15 13:29:56.043: INFO: (19) /api/v1/namespaces/proxy-9799/pods/http:proxy-service-98m9n-tsh7z:160/proxy/: foo (200; 5.1606ms) Jul 15 13:29:56.043: INFO: (19) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:460/proxy/: tls baz (200; 5.042407ms) Jul 15 13:29:56.043: INFO: (19) /api/v1/namespaces/proxy-9799/pods/https:proxy-service-98m9n-tsh7z:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 15 13:30:13.069: INFO: Creating ReplicaSet my-hostname-basic-786f85c4-0688-4d7c-bd9f-cec08254c135 Jul 15 13:30:13.085: INFO: Pod name my-hostname-basic-786f85c4-0688-4d7c-bd9f-cec08254c135: Found 0 pods out of 1 Jul 15 13:30:18.089: INFO: Pod name my-hostname-basic-786f85c4-0688-4d7c-bd9f-cec08254c135: Found 1 pods out of 1 Jul 15 13:30:18.089: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-786f85c4-0688-4d7c-bd9f-cec08254c135" is running Jul 15 13:30:18.092: INFO: Pod "my-hostname-basic-786f85c4-0688-4d7c-bd9f-cec08254c135-qx9kg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-15 13:30:13 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-15 13:30:16 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-15 13:30:16 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-15 13:30:13 +0000 UTC Reason: Message:}]) Jul 15 13:30:18.093: INFO: Trying to dial the pod Jul 15 13:30:23.116: INFO: Controller my-hostname-basic-786f85c4-0688-4d7c-bd9f-cec08254c135: Got expected result from replica 1 [my-hostname-basic-786f85c4-0688-4d7c-bd9f-cec08254c135-qx9kg]: "my-hostname-basic-786f85c4-0688-4d7c-bd9f-cec08254c135-qx9kg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:30:23.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3996" for this suite. Jul 15 13:30:29.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:30:29.216: INFO: namespace replicaset-3996 deletion completed in 6.096211921s • [SLOW TEST:16.210 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:30:29.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 15 13:30:29.300: INFO: Waiting up to 5m0s for pod "pod-0c80ec10-5565-4ae1-bfa0-832f5e2194fc" in namespace "emptydir-892" to be "success or failure" Jul 15 13:30:29.307: INFO: Pod "pod-0c80ec10-5565-4ae1-bfa0-832f5e2194fc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.82393ms Jul 15 13:30:31.351: INFO: Pod "pod-0c80ec10-5565-4ae1-bfa0-832f5e2194fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0519187s Jul 15 13:30:33.356: INFO: Pod "pod-0c80ec10-5565-4ae1-bfa0-832f5e2194fc": Phase="Running", Reason="", readiness=true. Elapsed: 4.056070744s Jul 15 13:30:35.360: INFO: Pod "pod-0c80ec10-5565-4ae1-bfa0-832f5e2194fc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.060130367s STEP: Saw pod success Jul 15 13:30:35.360: INFO: Pod "pod-0c80ec10-5565-4ae1-bfa0-832f5e2194fc" satisfied condition "success or failure" Jul 15 13:30:35.363: INFO: Trying to get logs from node iruya-worker pod pod-0c80ec10-5565-4ae1-bfa0-832f5e2194fc container test-container: STEP: delete the pod Jul 15 13:30:35.431: INFO: Waiting for pod pod-0c80ec10-5565-4ae1-bfa0-832f5e2194fc to disappear Jul 15 13:30:35.450: INFO: Pod pod-0c80ec10-5565-4ae1-bfa0-832f5e2194fc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:30:35.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-892" for this suite. Jul 15 13:30:41.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:30:41.555: INFO: namespace emptydir-892 deletion completed in 6.101325687s • [SLOW TEST:12.339 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:30:41.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jul 15 13:30:41.664: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:30:49.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5435" for this suite. 
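
The phase transitions logged above (Pending, then Running, then Succeeded, with elapsed times) come from a poll-until-deadline loop. A stdlib-only Go sketch of that pattern, with illustrative names rather than the framework's actual helper:

package main

import (
	"fmt"
	"time"
)

// waitForPhase polls getPhase every interval until it reports want or the
// timeout expires, mirroring the "Waiting up to 5m0s for pod ... to be
// 'success or failure'" loop above. getPhase stands in for a real API call.
func waitForPhase(getPhase func() string, want string, interval, timeout time.Duration) error {
	start := time.Now()
	for {
		phase := getPhase()
		fmt.Printf("Phase=%q. Elapsed: %s\n", phase, time.Since(start))
		if phase == want {
			return nil
		}
		if time.Since(start) >= timeout {
			return fmt.Errorf("timed out after %v waiting for phase %q", timeout, want)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Canned phase sequence standing in for successive pod status reads.
	phases := []string{"Pending", "Pending", "Running", "Succeeded"}
	i := 0
	next := func() string {
		p := phases[i]
		if i < len(phases)-1 {
			i++
		}
		return p
	}
	if err := waitForPhase(next, "Succeeded", 10*time.Millisecond, time.Minute); err != nil {
		fmt.Println(err)
	}
}
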
Jul 15 13:31:11.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:31:11.566: INFO: namespace init-container-5435 deletion completed in 22.10157335s • [SLOW TEST:30.011 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:31:11.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 15 13:31:11.692: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jul 15 13:31:11.708: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:11.721: INFO: Number of nodes with available pods: 0 Jul 15 13:31:11.721: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:31:12.727: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:12.731: INFO: Number of nodes with available pods: 0 Jul 15 13:31:12.731: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:31:13.880: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:13.883: INFO: Number of nodes with available pods: 0 Jul 15 13:31:13.883: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:31:14.727: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:14.730: INFO: Number of nodes with available pods: 0 Jul 15 13:31:14.730: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:31:15.726: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:15.763: INFO: Number of nodes with available pods: 2 Jul 15 13:31:15.763: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jul 15 13:31:15.806: INFO: Wrong image for pod: daemon-set-569nx. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:15.806: INFO: Wrong image for pod: daemon-set-vkv9h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:15.812: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:16.817: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:16.817: INFO: Wrong image for pod: daemon-set-vkv9h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:16.820: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:17.921: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:17.921: INFO: Wrong image for pod: daemon-set-vkv9h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:17.925: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:18.817: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:18.817: INFO: Wrong image for pod: daemon-set-vkv9h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:18.821: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:19.817: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:19.817: INFO: Wrong image for pod: daemon-set-vkv9h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:19.817: INFO: Pod daemon-set-vkv9h is not available Jul 15 13:31:19.821: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:20.817: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:20.817: INFO: Wrong image for pod: daemon-set-vkv9h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:20.817: INFO: Pod daemon-set-vkv9h is not available Jul 15 13:31:20.821: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:21.816: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:21.816: INFO: Wrong image for pod: daemon-set-vkv9h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jul 15 13:31:21.816: INFO: Pod daemon-set-vkv9h is not available Jul 15 13:31:21.821: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:22.817: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:22.817: INFO: Wrong image for pod: daemon-set-vkv9h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:22.817: INFO: Pod daemon-set-vkv9h is not available Jul 15 13:31:22.821: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:23.816: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:23.816: INFO: Wrong image for pod: daemon-set-vkv9h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:23.816: INFO: Pod daemon-set-vkv9h is not available Jul 15 13:31:23.820: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:24.816: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:24.817: INFO: Wrong image for pod: daemon-set-vkv9h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:24.817: INFO: Pod daemon-set-vkv9h is not available Jul 15 13:31:24.831: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:25.817: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:25.817: INFO: Wrong image for pod: daemon-set-vkv9h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:25.817: INFO: Pod daemon-set-vkv9h is not available Jul 15 13:31:25.822: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:26.826: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:26.826: INFO: Pod daemon-set-j49sv is not available Jul 15 13:31:26.836: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:27.817: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:27.817: INFO: Pod daemon-set-j49sv is not available Jul 15 13:31:27.821: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:28.956: INFO: Wrong image for pod: daemon-set-569nx. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:28.956: INFO: Pod daemon-set-j49sv is not available Jul 15 13:31:28.960: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:29.867: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:29.867: INFO: Pod daemon-set-j49sv is not available Jul 15 13:31:29.870: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:30.903: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:30.908: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:31.958: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:31.961: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:32.817: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:32.817: INFO: Pod daemon-set-569nx is not available Jul 15 13:31:32.821: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:33.817: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:33.817: INFO: Pod daemon-set-569nx is not available Jul 15 13:31:33.821: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:34.816: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:34.816: INFO: Pod daemon-set-569nx is not available Jul 15 13:31:34.824: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:35.817: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 15 13:31:35.817: INFO: Pod daemon-set-569nx is not available Jul 15 13:31:35.822: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:36.817: INFO: Wrong image for pod: daemon-set-569nx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jul 15 13:31:36.817: INFO: Pod daemon-set-569nx is not available Jul 15 13:31:36.821: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:37.817: INFO: Pod daemon-set-qcc2k is not available Jul 15 13:31:37.821: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Jul 15 13:31:37.825: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:37.827: INFO: Number of nodes with available pods: 1 Jul 15 13:31:37.827: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:31:38.873: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:38.877: INFO: Number of nodes with available pods: 1 Jul 15 13:31:38.877: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:31:39.832: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:31:39.835: INFO: Number of nodes with available pods: 2 Jul 15 13:31:39.835: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8691, will wait for the garbage collector to delete the pods Jul 15 13:31:39.933: INFO: Deleting DaemonSet.extensions daemon-set took: 30.451913ms Jul 15 13:31:40.233: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.273378ms Jul 15 13:31:46.937: INFO: Number of nodes with available pods: 0 Jul 15 13:31:46.937: INFO: Number of running nodes: 0, number of available pods: 0 Jul 15 13:31:46.939: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8691/daemonsets","resourceVersion":"1026378"},"items":null} Jul 15 13:31:46.942: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8691/pods","resourceVersion":"1026378"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:31:46.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8691" for this suite. 
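
The RollingUpdate check above converges once no daemon pod still reports the old image and every node has an available pod again. A small Go sketch of the image bookkeeping the loop keeps re-evaluating (assumed shape, using the two images from this run; the suite's own helper differs):

package main

import (
	"fmt"
	"sort"
)

// wrongImagePods returns the daemon pods still running something other than
// want -- the condition behind the "Wrong image for pod" messages above.
func wrongImagePods(podImages map[string]string, want string) []string {
	var stale []string
	for pod, image := range podImages {
		if image != want {
			stale = append(stale, pod)
		}
	}
	sort.Strings(stale) // deterministic output for logging
	return stale
}

func main() {
	pods := map[string]string{
		"daemon-set-569nx": "docker.io/library/nginx:1.14-alpine",        // pre-update image
		"daemon-set-qcc2k": "gcr.io/kubernetes-e2e-test-images/redis:1.0", // updated image
	}
	for _, p := range wrongImagePods(pods, "gcr.io/kubernetes-e2e-test-images/redis:1.0") {
		fmt.Printf("Wrong image for pod: %s.\n", p)
	}
}
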
Jul 15 13:31:54.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:31:55.062: INFO: namespace daemonsets-8691 deletion completed in 8.105520736s • [SLOW TEST:43.496 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:31:55.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:31:59.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3620" for this suite. Jul 15 13:32:05.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:32:05.380: INFO: namespace emptydir-wrapper-3620 deletion completed in 6.088049216s • [SLOW TEST:10.317 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:32:05.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 15 13:32:05.436: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:32:09.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4657" for this suite. 
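
Remote command execution here rides a single websocket: the client upgrades the pod's exec subresource URL, and each frame arrives prefixed with a channel byte naming the stream (0 stdin, 1 stdout, 2 stderr). A rough client sketch, assuming an unauthenticated endpoint published by `kubectl proxy` on 127.0.0.1:8001 and a placeholder pod name; this is not the suite's own code:

package main

import (
	"fmt"
	"net/url"

	"golang.org/x/net/websocket"
)

func main() {
	q := url.Values{}
	q.Add("command", "echo") // repeated "command" keys form the argv
	q.Add("command", "remote execution over websockets works")
	q.Set("stdout", "true")
	q.Set("stderr", "true")
	target := "ws://127.0.0.1:8001/api/v1/namespaces/default/pods/mypod/exec?" + q.Encode()

	cfg, err := websocket.NewConfig(target, "http://127.0.0.1:8001")
	if err != nil {
		panic(err)
	}
	// The API server multiplexes the streams over one socket under this
	// subprotocol; the first byte of each frame is the channel number.
	cfg.Protocol = []string{"channel.k8s.io"}

	ws, err := websocket.DialConfig(cfg)
	if err != nil {
		panic(err)
	}
	defer ws.Close()

	buf := make([]byte, 4096)
	for {
		n, err := ws.Read(buf)
		if err != nil {
			break // connection closes when the command exits
		}
		if n > 1 {
			fmt.Printf("channel %d: %s", buf[0], buf[1:n])
		}
	}
}
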
Jul 15 13:32:59.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:32:59.718: INFO: namespace pods-4657 deletion completed in 50.095872119s • [SLOW TEST:54.338 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:32:59.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:33:06.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1815" for this suite. Jul 15 13:33:12.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:33:12.130: INFO: namespace namespaces-1815 deletion completed in 6.103320007s STEP: Destroying namespace "nsdeletetest-3168" for this suite. Jul 15 13:33:12.133: INFO: Namespace nsdeletetest-3168 was already deleted STEP: Destroying namespace "nsdeletetest-9851" for this suite. 
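
The assertion behind these steps: deleting a namespace cascades to the objects inside it, services included, so a namespace recreated under the same name lists none. A sketch against a 1.15-era client-go (pre-context method signatures; the kubeconfig path and namespace name are placeholders):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)

	// Delete the namespace; the namespace controller removes every object
	// inside it before the namespace object itself disappears.
	if err := c.CoreV1().Namespaces().Delete("nsdeletetest", &metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// ... poll Namespaces().Get until it returns NotFound, then recreate ...

	svcs, err := c.CoreV1().Services("nsdeletetest").List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("services in recreated namespace: %d\n", len(svcs.Items)) // expect 0
}
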
Jul 15 13:33:18.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:33:18.221: INFO: namespace nsdeletetest-9851 deletion completed in 6.088382636s • [SLOW TEST:18.503 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:33:18.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jul 15 13:33:18.300: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:33:25.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8921" for this suite. 
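
Init containers run serially, each to completion, before any app container starts; with RestartPolicy Never a failed init container fails the pod outright. A sketch of the pod shape this spec builds, in Go API types (names and images are placeholders):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Init containers run one at a time, in order, and each must
			// exit 0 before the next starts.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox:1.29", Command: []string{"/bin/true"}},
			},
		},
	}
	fmt.Printf("pod %s has %d init containers\n", pod.Name, len(pod.Spec.InitContainers))
}
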
Jul 15 13:33:31.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:33:31.890: INFO: namespace init-container-8921 deletion completed in 6.080946496s • [SLOW TEST:13.668 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:33:31.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-625b2644-bb21-42a6-84fc-cdd6d622cc5e STEP: Creating a pod to test consume secrets Jul 15 13:33:31.960: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7cd5d524-f56f-45ce-8560-71c1a0705820" in namespace "projected-1209" to be "success or failure" Jul 15 13:33:32.018: INFO: Pod "pod-projected-secrets-7cd5d524-f56f-45ce-8560-71c1a0705820": Phase="Pending", Reason="", readiness=false. Elapsed: 57.869298ms Jul 15 13:33:34.022: INFO: Pod "pod-projected-secrets-7cd5d524-f56f-45ce-8560-71c1a0705820": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062194503s Jul 15 13:33:36.025: INFO: Pod "pod-projected-secrets-7cd5d524-f56f-45ce-8560-71c1a0705820": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065110179s Jul 15 13:33:38.054: INFO: Pod "pod-projected-secrets-7cd5d524-f56f-45ce-8560-71c1a0705820": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.094217729s STEP: Saw pod success Jul 15 13:33:38.054: INFO: Pod "pod-projected-secrets-7cd5d524-f56f-45ce-8560-71c1a0705820" satisfied condition "success or failure" Jul 15 13:33:38.126: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-7cd5d524-f56f-45ce-8560-71c1a0705820 container projected-secret-volume-test: STEP: delete the pod Jul 15 13:33:38.367: INFO: Waiting for pod pod-projected-secrets-7cd5d524-f56f-45ce-8560-71c1a0705820 to disappear Jul 15 13:33:38.384: INFO: Pod pod-projected-secrets-7cd5d524-f56f-45ce-8560-71c1a0705820 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:33:38.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1209" for this suite. 
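
defaultMode on a projected volume sets the file mode of every key it renders. A sketch of such a volume in Go API types (the 0400 mode and the names are illustrative, not this spec's values):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // applied to each projected file unless overridden per-item
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
					},
				}},
			},
		},
	}
	fmt.Printf("volume %s, defaultMode %o\n", vol.Name, *vol.VolumeSource.Projected.DefaultMode)
}
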
Jul 15 13:33:44.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:33:44.469: INFO: namespace projected-1209 deletion completed in 6.081238386s • [SLOW TEST:12.578 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:33:44.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-378.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-378.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-378.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-378.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-378.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-378.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-378.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-378.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-378.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-378.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-378.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 25.95.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.95.25_udp@PTR;check="$$(dig +tcp +noall +answer +search 25.95.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.95.25_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-378.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-378.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-378.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-378.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-378.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-378.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-378.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-378.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-378.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-378.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-378.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 25.95.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.95.25_udp@PTR;check="$$(dig +tcp +noall +answer +search 25.95.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.95.25_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 15 13:33:50.630: INFO: Unable to read wheezy_udp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:33:50.633: INFO: Unable to read wheezy_tcp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:33:50.636: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:33:50.639: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:33:50.662: INFO: Unable to read jessie_udp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:33:50.665: INFO: Unable to read jessie_tcp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:33:50.668: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:33:50.671: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:33:50.688: INFO: Lookups using dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537 failed for: [wheezy_udp@dns-test-service.dns-378.svc.cluster.local wheezy_tcp@dns-test-service.dns-378.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local jessie_udp@dns-test-service.dns-378.svc.cluster.local jessie_tcp@dns-test-service.dns-378.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local] Jul 15 13:33:55.693: INFO: Unable to read wheezy_udp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:33:55.697: INFO: Unable to read wheezy_tcp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 
13:33:55.700: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:33:55.704: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:33:55.726: INFO: Unable to read jessie_udp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:33:55.729: INFO: Unable to read jessie_tcp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:33:55.732: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:33:55.734: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:33:55.752: INFO: Lookups using dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537 failed for: [wheezy_udp@dns-test-service.dns-378.svc.cluster.local wheezy_tcp@dns-test-service.dns-378.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local jessie_udp@dns-test-service.dns-378.svc.cluster.local jessie_tcp@dns-test-service.dns-378.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local] Jul 15 13:34:00.693: INFO: Unable to read wheezy_udp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:00.697: INFO: Unable to read wheezy_tcp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:00.700: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:00.703: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:00.726: INFO: Unable to read jessie_udp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods 
dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:00.728: INFO: Unable to read jessie_tcp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:00.731: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:00.734: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:00.750: INFO: Lookups using dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537 failed for: [wheezy_udp@dns-test-service.dns-378.svc.cluster.local wheezy_tcp@dns-test-service.dns-378.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local jessie_udp@dns-test-service.dns-378.svc.cluster.local jessie_tcp@dns-test-service.dns-378.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local] Jul 15 13:34:05.693: INFO: Unable to read wheezy_udp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:05.697: INFO: Unable to read wheezy_tcp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:05.700: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:05.706: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:05.724: INFO: Unable to read jessie_udp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:05.727: INFO: Unable to read jessie_tcp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:05.730: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:05.733: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the 
requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:05.752: INFO: Lookups using dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537 failed for: [wheezy_udp@dns-test-service.dns-378.svc.cluster.local wheezy_tcp@dns-test-service.dns-378.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local jessie_udp@dns-test-service.dns-378.svc.cluster.local jessie_tcp@dns-test-service.dns-378.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local] Jul 15 13:34:10.694: INFO: Unable to read wheezy_udp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:10.697: INFO: Unable to read wheezy_tcp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:10.701: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:10.703: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:10.725: INFO: Unable to read jessie_udp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:10.727: INFO: Unable to read jessie_tcp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:10.730: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:10.733: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:10.752: INFO: Lookups using dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537 failed for: [wheezy_udp@dns-test-service.dns-378.svc.cluster.local wheezy_tcp@dns-test-service.dns-378.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local jessie_udp@dns-test-service.dns-378.svc.cluster.local jessie_tcp@dns-test-service.dns-378.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local] Jul 15 13:34:15.693: INFO: Unable to read wheezy_udp@dns-test-service.dns-378.svc.cluster.local from pod 
dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:15.696: INFO: Unable to read wheezy_tcp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:15.699: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:15.702: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:15.724: INFO: Unable to read jessie_udp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:15.728: INFO: Unable to read jessie_tcp@dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:15.731: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:15.735: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local from pod dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537: the server could not find the requested resource (get pods dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537) Jul 15 13:34:15.753: INFO: Lookups using dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537 failed for: [wheezy_udp@dns-test-service.dns-378.svc.cluster.local wheezy_tcp@dns-test-service.dns-378.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local jessie_udp@dns-test-service.dns-378.svc.cluster.local jessie_tcp@dns-test-service.dns-378.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-378.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-378.svc.cluster.local] Jul 15 13:34:20.762: INFO: DNS probes using dns-378/dns-test-1e2704ae-3125-4e91-898b-c70a6c18f537 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:34:21.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-378" for this suite. 
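
Among the records probed above is the service IP's PTR: the dig targets query 25.95.106.10.in-addr.arpa. for the service IP 10.106.95.25. The reversal is mechanical; a tiny Go illustration (not the suite's helper, and it assumes a well-formed dotted quad):

package main

import (
	"fmt"
	"strings"
)

// reverseIPv4 builds the in-addr.arpa PTR name for an IPv4 address by
// reversing its octets.
func reverseIPv4(ip string) string {
	o := strings.Split(ip, ".")
	return fmt.Sprintf("%s.%s.%s.%s.in-addr.arpa.", o[3], o[2], o[1], o[0])
}

func main() {
	fmt.Println(reverseIPv4("10.106.95.25")) // 25.95.106.10.in-addr.arpa.
}
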
Jul 15 13:34:29.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:34:29.570: INFO: namespace dns-378 deletion completed in 8.123229197s • [SLOW TEST:45.101 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:34:29.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4891 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 15 13:34:29.643: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 15 13:34:57.733: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.64 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4891 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 13:34:57.733: INFO: >>> kubeConfig: /root/.kube/config I0715 13:34:57.762176 6 log.go:172] (0xc0014f02c0) (0xc000b9fa40) Create stream I0715 13:34:57.762203 6 log.go:172] (0xc0014f02c0) (0xc000b9fa40) Stream added, broadcasting: 1 I0715 13:34:57.764138 6 log.go:172] (0xc0014f02c0) Reply frame received for 1 I0715 13:34:57.764198 6 log.go:172] (0xc0014f02c0) (0xc002a940a0) Create stream I0715 13:34:57.764265 6 log.go:172] (0xc0014f02c0) (0xc002a940a0) Stream added, broadcasting: 3 I0715 13:34:57.765234 6 log.go:172] (0xc0014f02c0) Reply frame received for 3 I0715 13:34:57.765284 6 log.go:172] (0xc0014f02c0) (0xc000b9fae0) Create stream I0715 13:34:57.765296 6 log.go:172] (0xc0014f02c0) (0xc000b9fae0) Stream added, broadcasting: 5 I0715 13:34:57.766291 6 log.go:172] (0xc0014f02c0) Reply frame received for 5 I0715 13:34:58.829975 6 log.go:172] (0xc0014f02c0) Data frame received for 3 I0715 13:34:58.830002 6 log.go:172] (0xc002a940a0) (3) Data frame handling I0715 13:34:58.830014 6 log.go:172] (0xc002a940a0) (3) Data frame sent I0715 13:34:58.830022 6 log.go:172] (0xc0014f02c0) Data frame received for 3 I0715 13:34:58.830028 6 log.go:172] (0xc002a940a0) (3) Data frame handling I0715 13:34:58.830149 6 log.go:172] (0xc0014f02c0) Data frame received for 5 I0715 13:34:58.830167 6 log.go:172] (0xc000b9fae0) (5) Data frame handling I0715 13:34:58.832285 6 log.go:172] (0xc0014f02c0) Data frame received for 1 I0715 13:34:58.832307 6 log.go:172] (0xc000b9fa40) (1) Data frame handling I0715 13:34:58.832328 6 log.go:172] (0xc000b9fa40) (1) Data frame sent I0715 13:34:58.832346 6 log.go:172] (0xc0014f02c0) (0xc000b9fa40) Stream 
removed, broadcasting: 1 I0715 13:34:58.832365 6 log.go:172] (0xc0014f02c0) Go away received I0715 13:34:58.832469 6 log.go:172] (0xc0014f02c0) (0xc000b9fa40) Stream removed, broadcasting: 1 I0715 13:34:58.832496 6 log.go:172] (0xc0014f02c0) (0xc002a940a0) Stream removed, broadcasting: 3 I0715 13:34:58.832532 6 log.go:172] (0xc0014f02c0) (0xc000b9fae0) Stream removed, broadcasting: 5 Jul 15 13:34:58.832: INFO: Found all expected endpoints: [netserver-0] Jul 15 13:34:58.836: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.168 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4891 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 13:34:58.836: INFO: >>> kubeConfig: /root/.kube/config I0715 13:34:58.868645 6 log.go:172] (0xc0013a66e0) (0xc000a33b80) Create stream I0715 13:34:58.868675 6 log.go:172] (0xc0013a66e0) (0xc000a33b80) Stream added, broadcasting: 1 I0715 13:34:58.871553 6 log.go:172] (0xc0013a66e0) Reply frame received for 1 I0715 13:34:58.871594 6 log.go:172] (0xc0013a66e0) (0xc002a94140) Create stream I0715 13:34:58.871609 6 log.go:172] (0xc0013a66e0) (0xc002a94140) Stream added, broadcasting: 3 I0715 13:34:58.873254 6 log.go:172] (0xc0013a66e0) Reply frame received for 3 I0715 13:34:58.873305 6 log.go:172] (0xc0013a66e0) (0xc002a941e0) Create stream I0715 13:34:58.873322 6 log.go:172] (0xc0013a66e0) (0xc002a941e0) Stream added, broadcasting: 5 I0715 13:34:58.875352 6 log.go:172] (0xc0013a66e0) Reply frame received for 5 I0715 13:34:59.944627 6 log.go:172] (0xc0013a66e0) Data frame received for 3 I0715 13:34:59.944808 6 log.go:172] (0xc002a94140) (3) Data frame handling I0715 13:34:59.944860 6 log.go:172] (0xc002a94140) (3) Data frame sent I0715 13:34:59.945461 6 log.go:172] (0xc0013a66e0) Data frame received for 5 I0715 13:34:59.945490 6 log.go:172] (0xc002a941e0) (5) Data frame handling I0715 13:34:59.945519 6 log.go:172] (0xc0013a66e0) Data frame received for 3 I0715 13:34:59.945531 6 log.go:172] (0xc002a94140) (3) Data frame handling I0715 13:34:59.947482 6 log.go:172] (0xc0013a66e0) Data frame received for 1 I0715 13:34:59.947583 6 log.go:172] (0xc000a33b80) (1) Data frame handling I0715 13:34:59.947671 6 log.go:172] (0xc000a33b80) (1) Data frame sent I0715 13:34:59.947705 6 log.go:172] (0xc0013a66e0) (0xc000a33b80) Stream removed, broadcasting: 1 I0715 13:34:59.947728 6 log.go:172] (0xc0013a66e0) Go away received I0715 13:34:59.947806 6 log.go:172] (0xc0013a66e0) (0xc000a33b80) Stream removed, broadcasting: 1 I0715 13:34:59.947823 6 log.go:172] (0xc0013a66e0) (0xc002a94140) Stream removed, broadcasting: 3 I0715 13:34:59.947840 6 log.go:172] (0xc0013a66e0) (0xc002a941e0) Stream removed, broadcasting: 5 Jul 15 13:34:59.947: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:34:59.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4891" for this suite. 
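
The endpoint check above shells into a hostexec pod and runs `echo hostName | nc -w 1 -u 10.244.2.64 8081`. A stdlib Go sketch of the same single-datagram probe, assuming (as the test does) a netserver on the far end that answers "hostName" with its hostname:

package main

import (
	"fmt"
	"net"
	"time"
)

// probeUDP sends one datagram and waits up to a second for the reply,
// matching nc's -w 1 behaviour.
func probeUDP(addr, payload string) (string, error) {
	conn, err := net.Dial("udp", addr)
	if err != nil {
		return "", err
	}
	defer conn.Close()
	if _, err := conn.Write([]byte(payload + "\n")); err != nil {
		return "", err
	}
	_ = conn.SetReadDeadline(time.Now().Add(time.Second))
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		return "", err
	}
	return string(buf[:n]), nil
}

func main() {
	reply, err := probeUDP("10.244.2.64:8081", "hostName")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("reply:", reply) // expect the netserver pod's hostname
}
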
Jul 15 13:35:21.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:35:22.086: INFO: namespace pod-network-test-4891 deletion completed in 22.135227444s • [SLOW TEST:52.516 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:35:22.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-987e0e80-caaa-4085-bc1e-aae20e03c426 STEP: Creating secret with name secret-projected-all-test-volume-9e6bb919-87e3-4ac6-bf7f-b8e92eda8a55 STEP: Creating a pod to test Check all projections for projected volume plugin Jul 15 13:35:22.159: INFO: Waiting up to 5m0s for pod "projected-volume-d116cc66-758b-4af8-b11c-50c41ac15a05" in namespace "projected-48" to be "success or failure" Jul 15 13:35:22.192: INFO: Pod "projected-volume-d116cc66-758b-4af8-b11c-50c41ac15a05": Phase="Pending", Reason="", readiness=false. Elapsed: 33.25986ms Jul 15 13:35:24.240: INFO: Pod "projected-volume-d116cc66-758b-4af8-b11c-50c41ac15a05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081360932s Jul 15 13:35:26.243: INFO: Pod "projected-volume-d116cc66-758b-4af8-b11c-50c41ac15a05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084059727s STEP: Saw pod success Jul 15 13:35:26.243: INFO: Pod "projected-volume-d116cc66-758b-4af8-b11c-50c41ac15a05" satisfied condition "success or failure" Jul 15 13:35:26.245: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-d116cc66-758b-4af8-b11c-50c41ac15a05 container projected-all-volume-test: STEP: delete the pod Jul 15 13:35:26.467: INFO: Waiting for pod projected-volume-d116cc66-758b-4af8-b11c-50c41ac15a05 to disappear Jul 15 13:35:26.554: INFO: Pod projected-volume-d116cc66-758b-4af8-b11c-50c41ac15a05 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:35:26.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-48" for this suite. 
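The test above mounts a configMap, a secret, and downwardAPI data together through a single projected volume. A minimal sketch of such a pod, assuming a pre-existing configMap my-config and secret my-secret that each carry a key named data (all of these names hypothetical):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-all-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-all-volume-test
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat /all/podname /all/cm /all/secret"]
        volumeMounts:
        - name: all-in-one
          mountPath: /all
      volumes:
      - name: all-in-one
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name
          - configMap:
              name: my-config
              items:
              - key: data
                path: cm
          - secret:
              name: my-secret
              items:
              - key: data
                path: secret
    EOF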
Jul 15 13:35:32.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:35:32.687: INFO: namespace projected-48 deletion completed in 6.128972159s • [SLOW TEST:10.601 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:35:32.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jul 15 13:35:40.029: INFO: 10 pods remaining Jul 15 13:35:40.029: INFO: 0 pods has nil DeletionTimestamp Jul 15 13:35:40.029: INFO: Jul 15 13:35:40.768: INFO: 0 pods remaining Jul 15 13:35:40.768: INFO: 0 pods has nil DeletionTimestamp Jul 15 13:35:40.768: INFO: Jul 15 13:35:41.428: INFO: 0 pods remaining Jul 15 13:35:41.428: INFO: 0 pods has nil DeletionTimestamp Jul 15 13:35:41.428: INFO: STEP: Gathering metrics W0715 13:35:42.451269 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 15 13:35:42.451: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:35:42.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3696" for this suite. 
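The deleteOptions behaviour exercised above is foreground cascading deletion: with propagationPolicy=Foreground the replication controller remains visible (with a deletionTimestamp set) until the garbage collector has removed all of its pods, which matches the "10 pods remaining ... 0 pods remaining" countdown in the log. A sketch of triggering the same behaviour by hand against the REST API, assuming a hypothetical RC my-rc in the default namespace and a local kubectl proxy:

    kubectl proxy --port=8001 &
    curl -X DELETE -H "Content-Type: application/json" \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
      http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/my-rc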
Jul 15 13:35:48.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:35:48.762: INFO: namespace gc-3696 deletion completed in 6.308224728s • [SLOW TEST:16.075 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:35:48.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3467.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3467.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3467.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3467.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3467.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3467.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 15 13:35:54.901: INFO: DNS probes using dns-3467/dns-test-9f24aae4-56e3-4d42-a5ca-2d58452b1aea succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:35:54.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3467" for this suite. 
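The wheezy/jessie probe loops above verify two things: that the pod's own hostname resolves through the kubelet-managed /etc/hosts (via getent), and that the pod A record derived from the pod IP resolves over both UDP and TCP (via dig). The same checks can be run by hand from any pod that has dig installed; the prober pod name below is hypothetical and dns-3467 was this run's namespace:

    kubectl -n dns-3467 exec dns-prober -- \
      getent hosts dns-querier-1.dns-test-service.dns-3467.svc.cluster.local
    # pod A records replace the dots of the pod IP with dashes,
    # e.g. 10.244.1.170 -> 10-244-1-170.dns-3467.pod.cluster.local
    kubectl -n dns-3467 exec dns-prober -- \
      dig +notcp +noall +answer +search 10-244-1-170.dns-3467.pod.cluster.local A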
Jul 15 13:36:00.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:36:01.075: INFO: namespace dns-3467 deletion completed in 6.128544264s • [SLOW TEST:12.312 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:36:01.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jul 15 13:36:11.227: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7038 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 13:36:11.227: INFO: >>> kubeConfig: /root/.kube/config I0715 13:36:11.264157 6 log.go:172] (0xc001b74a50) (0xc002c04d20) Create stream I0715 13:36:11.264187 6 log.go:172] (0xc001b74a50) (0xc002c04d20) Stream added, broadcasting: 1 I0715 13:36:11.271764 6 log.go:172] (0xc001b74a50) Reply frame received for 1 I0715 13:36:11.271821 6 log.go:172] (0xc001b74a50) (0xc0018b8000) Create stream I0715 13:36:11.271833 6 log.go:172] (0xc001b74a50) (0xc0018b8000) Stream added, broadcasting: 3 I0715 13:36:11.272822 6 log.go:172] (0xc001b74a50) Reply frame received for 3 I0715 13:36:11.272860 6 log.go:172] (0xc001b74a50) (0xc003115a40) Create stream I0715 13:36:11.272874 6 log.go:172] (0xc001b74a50) (0xc003115a40) Stream added, broadcasting: 5 I0715 13:36:11.274102 6 log.go:172] (0xc001b74a50) Reply frame received for 5 I0715 13:36:11.347993 6 log.go:172] (0xc001b74a50) Data frame received for 5 I0715 13:36:11.348024 6 log.go:172] (0xc003115a40) (5) Data frame handling I0715 13:36:11.348051 6 log.go:172] (0xc001b74a50) Data frame received for 3 I0715 13:36:11.348076 6 log.go:172] (0xc0018b8000) (3) Data frame handling I0715 13:36:11.348098 6 log.go:172] (0xc0018b8000) (3) Data frame sent I0715 13:36:11.348116 6 log.go:172] (0xc001b74a50) Data frame received for 3 I0715 13:36:11.348129 6 log.go:172] (0xc0018b8000) (3) Data frame handling I0715 13:36:11.349848 6 log.go:172] (0xc001b74a50) Data frame received for 1 I0715 13:36:11.349872 6 log.go:172] (0xc002c04d20) (1) Data frame handling I0715 13:36:11.349912 6 log.go:172] (0xc002c04d20) (1) Data frame sent I0715 13:36:11.349960 6 log.go:172] (0xc001b74a50) (0xc002c04d20) Stream removed, broadcasting: 1 I0715 13:36:11.350008 6 log.go:172] (0xc001b74a50) Go away received I0715 
13:36:11.350120 6 log.go:172] (0xc001b74a50) (0xc002c04d20) Stream removed, broadcasting: 1 I0715 13:36:11.350138 6 log.go:172] (0xc001b74a50) (0xc0018b8000) Stream removed, broadcasting: 3 I0715 13:36:11.350147 6 log.go:172] (0xc001b74a50) (0xc003115a40) Stream removed, broadcasting: 5 Jul 15 13:36:11.350: INFO: Exec stderr: "" Jul 15 13:36:11.350: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7038 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 13:36:11.350: INFO: >>> kubeConfig: /root/.kube/config I0715 13:36:11.383544 6 log.go:172] (0xc001b753f0) (0xc002c05040) Create stream I0715 13:36:11.383591 6 log.go:172] (0xc001b753f0) (0xc002c05040) Stream added, broadcasting: 1 I0715 13:36:11.385639 6 log.go:172] (0xc001b753f0) Reply frame received for 1 I0715 13:36:11.385671 6 log.go:172] (0xc001b753f0) (0xc0018b81e0) Create stream I0715 13:36:11.385682 6 log.go:172] (0xc001b753f0) (0xc0018b81e0) Stream added, broadcasting: 3 I0715 13:36:11.386577 6 log.go:172] (0xc001b753f0) Reply frame received for 3 I0715 13:36:11.386598 6 log.go:172] (0xc001b753f0) (0xc002c050e0) Create stream I0715 13:36:11.386603 6 log.go:172] (0xc001b753f0) (0xc002c050e0) Stream added, broadcasting: 5 I0715 13:36:11.387413 6 log.go:172] (0xc001b753f0) Reply frame received for 5 I0715 13:36:11.440248 6 log.go:172] (0xc001b753f0) Data frame received for 5 I0715 13:36:11.440275 6 log.go:172] (0xc002c050e0) (5) Data frame handling I0715 13:36:11.440292 6 log.go:172] (0xc001b753f0) Data frame received for 3 I0715 13:36:11.440299 6 log.go:172] (0xc0018b81e0) (3) Data frame handling I0715 13:36:11.440312 6 log.go:172] (0xc0018b81e0) (3) Data frame sent I0715 13:36:11.440320 6 log.go:172] (0xc001b753f0) Data frame received for 3 I0715 13:36:11.440325 6 log.go:172] (0xc0018b81e0) (3) Data frame handling I0715 13:36:11.442069 6 log.go:172] (0xc001b753f0) Data frame received for 1 I0715 13:36:11.442111 6 log.go:172] (0xc002c05040) (1) Data frame handling I0715 13:36:11.442138 6 log.go:172] (0xc002c05040) (1) Data frame sent I0715 13:36:11.442155 6 log.go:172] (0xc001b753f0) (0xc002c05040) Stream removed, broadcasting: 1 I0715 13:36:11.442193 6 log.go:172] (0xc001b753f0) Go away received I0715 13:36:11.442486 6 log.go:172] (0xc001b753f0) (0xc002c05040) Stream removed, broadcasting: 1 I0715 13:36:11.442513 6 log.go:172] (0xc001b753f0) (0xc0018b81e0) Stream removed, broadcasting: 3 I0715 13:36:11.442526 6 log.go:172] (0xc001b753f0) (0xc002c050e0) Stream removed, broadcasting: 5 Jul 15 13:36:11.442: INFO: Exec stderr: "" Jul 15 13:36:11.442: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7038 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 13:36:11.442: INFO: >>> kubeConfig: /root/.kube/config I0715 13:36:11.480126 6 log.go:172] (0xc002598420) (0xc002c055e0) Create stream I0715 13:36:11.480155 6 log.go:172] (0xc002598420) (0xc002c055e0) Stream added, broadcasting: 1 I0715 13:36:11.482858 6 log.go:172] (0xc002598420) Reply frame received for 1 I0715 13:36:11.482893 6 log.go:172] (0xc002598420) (0xc001fbc000) Create stream I0715 13:36:11.482905 6 log.go:172] (0xc002598420) (0xc001fbc000) Stream added, broadcasting: 3 I0715 13:36:11.483829 6 log.go:172] (0xc002598420) Reply frame received for 3 I0715 13:36:11.483858 6 log.go:172] (0xc002598420) (0xc002c05680) Create stream I0715 13:36:11.483868 6 log.go:172] (0xc002598420) 
(0xc002c05680) Stream added, broadcasting: 5 I0715 13:36:11.484524 6 log.go:172] (0xc002598420) Reply frame received for 5 I0715 13:36:11.539794 6 log.go:172] (0xc002598420) Data frame received for 5 I0715 13:36:11.539830 6 log.go:172] (0xc002c05680) (5) Data frame handling I0715 13:36:11.539851 6 log.go:172] (0xc002598420) Data frame received for 3 I0715 13:36:11.539862 6 log.go:172] (0xc001fbc000) (3) Data frame handling I0715 13:36:11.539874 6 log.go:172] (0xc001fbc000) (3) Data frame sent I0715 13:36:11.539885 6 log.go:172] (0xc002598420) Data frame received for 3 I0715 13:36:11.539900 6 log.go:172] (0xc001fbc000) (3) Data frame handling I0715 13:36:11.541183 6 log.go:172] (0xc002598420) Data frame received for 1 I0715 13:36:11.541198 6 log.go:172] (0xc002c055e0) (1) Data frame handling I0715 13:36:11.541211 6 log.go:172] (0xc002c055e0) (1) Data frame sent I0715 13:36:11.541224 6 log.go:172] (0xc002598420) (0xc002c055e0) Stream removed, broadcasting: 1 I0715 13:36:11.541322 6 log.go:172] (0xc002598420) (0xc002c055e0) Stream removed, broadcasting: 1 I0715 13:36:11.541338 6 log.go:172] (0xc002598420) (0xc001fbc000) Stream removed, broadcasting: 3 I0715 13:36:11.541487 6 log.go:172] (0xc002598420) Go away received I0715 13:36:11.541530 6 log.go:172] (0xc002598420) (0xc002c05680) Stream removed, broadcasting: 5 Jul 15 13:36:11.541: INFO: Exec stderr: "" Jul 15 13:36:11.541: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7038 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 13:36:11.541: INFO: >>> kubeConfig: /root/.kube/config I0715 13:36:11.568110 6 log.go:172] (0xc000c26f20) (0xc001fbc320) Create stream I0715 13:36:11.568144 6 log.go:172] (0xc000c26f20) (0xc001fbc320) Stream added, broadcasting: 1 I0715 13:36:11.570510 6 log.go:172] (0xc000c26f20) Reply frame received for 1 I0715 13:36:11.570540 6 log.go:172] (0xc000c26f20) (0xc001fbc3c0) Create stream I0715 13:36:11.570550 6 log.go:172] (0xc000c26f20) (0xc001fbc3c0) Stream added, broadcasting: 3 I0715 13:36:11.571273 6 log.go:172] (0xc000c26f20) Reply frame received for 3 I0715 13:36:11.571309 6 log.go:172] (0xc000c26f20) (0xc002c05720) Create stream I0715 13:36:11.571321 6 log.go:172] (0xc000c26f20) (0xc002c05720) Stream added, broadcasting: 5 I0715 13:36:11.572071 6 log.go:172] (0xc000c26f20) Reply frame received for 5 I0715 13:36:11.623359 6 log.go:172] (0xc000c26f20) Data frame received for 3 I0715 13:36:11.623461 6 log.go:172] (0xc001fbc3c0) (3) Data frame handling I0715 13:36:11.623502 6 log.go:172] (0xc001fbc3c0) (3) Data frame sent I0715 13:36:11.623523 6 log.go:172] (0xc000c26f20) Data frame received for 3 I0715 13:36:11.623542 6 log.go:172] (0xc001fbc3c0) (3) Data frame handling I0715 13:36:11.623905 6 log.go:172] (0xc000c26f20) Data frame received for 5 I0715 13:36:11.623936 6 log.go:172] (0xc002c05720) (5) Data frame handling I0715 13:36:11.625841 6 log.go:172] (0xc000c26f20) Data frame received for 1 I0715 13:36:11.625883 6 log.go:172] (0xc001fbc320) (1) Data frame handling I0715 13:36:11.625914 6 log.go:172] (0xc001fbc320) (1) Data frame sent I0715 13:36:11.625944 6 log.go:172] (0xc000c26f20) (0xc001fbc320) Stream removed, broadcasting: 1 I0715 13:36:11.625995 6 log.go:172] (0xc000c26f20) Go away received I0715 13:36:11.626095 6 log.go:172] (0xc000c26f20) (0xc001fbc320) Stream removed, broadcasting: 1 I0715 13:36:11.626131 6 log.go:172] (0xc000c26f20) (0xc001fbc3c0) Stream removed, broadcasting: 3 I0715 
13:36:11.626155 6 log.go:172] (0xc000c26f20) (0xc002c05720) Stream removed, broadcasting: 5 Jul 15 13:36:11.626: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jul 15 13:36:11.626: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7038 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 13:36:11.626: INFO: >>> kubeConfig: /root/.kube/config I0715 13:36:11.660495 6 log.go:172] (0xc00135eb00) (0xc001e0e640) Create stream I0715 13:36:11.660519 6 log.go:172] (0xc00135eb00) (0xc001e0e640) Stream added, broadcasting: 1 I0715 13:36:11.663047 6 log.go:172] (0xc00135eb00) Reply frame received for 1 I0715 13:36:11.663089 6 log.go:172] (0xc00135eb00) (0xc003115ae0) Create stream I0715 13:36:11.663103 6 log.go:172] (0xc00135eb00) (0xc003115ae0) Stream added, broadcasting: 3 I0715 13:36:11.664154 6 log.go:172] (0xc00135eb00) Reply frame received for 3 I0715 13:36:11.664182 6 log.go:172] (0xc00135eb00) (0xc0018b8280) Create stream I0715 13:36:11.664192 6 log.go:172] (0xc00135eb00) (0xc0018b8280) Stream added, broadcasting: 5 I0715 13:36:11.665138 6 log.go:172] (0xc00135eb00) Reply frame received for 5 I0715 13:36:11.722642 6 log.go:172] (0xc00135eb00) Data frame received for 3 I0715 13:36:11.722737 6 log.go:172] (0xc003115ae0) (3) Data frame handling I0715 13:36:11.722776 6 log.go:172] (0xc003115ae0) (3) Data frame sent I0715 13:36:11.722814 6 log.go:172] (0xc00135eb00) Data frame received for 3 I0715 13:36:11.722844 6 log.go:172] (0xc003115ae0) (3) Data frame handling I0715 13:36:11.722888 6 log.go:172] (0xc00135eb00) Data frame received for 5 I0715 13:36:11.722926 6 log.go:172] (0xc0018b8280) (5) Data frame handling I0715 13:36:11.724372 6 log.go:172] (0xc00135eb00) Data frame received for 1 I0715 13:36:11.724415 6 log.go:172] (0xc001e0e640) (1) Data frame handling I0715 13:36:11.724447 6 log.go:172] (0xc001e0e640) (1) Data frame sent I0715 13:36:11.724462 6 log.go:172] (0xc00135eb00) (0xc001e0e640) Stream removed, broadcasting: 1 I0715 13:36:11.724494 6 log.go:172] (0xc00135eb00) Go away received I0715 13:36:11.724673 6 log.go:172] (0xc00135eb00) (0xc001e0e640) Stream removed, broadcasting: 1 I0715 13:36:11.724864 6 log.go:172] (0xc00135eb00) (0xc003115ae0) Stream removed, broadcasting: 3 I0715 13:36:11.724901 6 log.go:172] (0xc00135eb00) (0xc0018b8280) Stream removed, broadcasting: 5 Jul 15 13:36:11.724: INFO: Exec stderr: "" Jul 15 13:36:11.724: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7038 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 13:36:11.725: INFO: >>> kubeConfig: /root/.kube/config I0715 13:36:11.759856 6 log.go:172] (0xc000c27970) (0xc001fbc6e0) Create stream I0715 13:36:11.759884 6 log.go:172] (0xc000c27970) (0xc001fbc6e0) Stream added, broadcasting: 1 I0715 13:36:11.762806 6 log.go:172] (0xc000c27970) Reply frame received for 1 I0715 13:36:11.762847 6 log.go:172] (0xc000c27970) (0xc001e0e6e0) Create stream I0715 13:36:11.762860 6 log.go:172] (0xc000c27970) (0xc001e0e6e0) Stream added, broadcasting: 3 I0715 13:36:11.763846 6 log.go:172] (0xc000c27970) Reply frame received for 3 I0715 13:36:11.763906 6 log.go:172] (0xc000c27970) (0xc003115b80) Create stream I0715 13:36:11.763922 6 log.go:172] (0xc000c27970) (0xc003115b80) Stream added, broadcasting: 5 I0715 13:36:11.764914 6 log.go:172] 
(0xc000c27970) Reply frame received for 5 I0715 13:36:11.821983 6 log.go:172] (0xc000c27970) Data frame received for 5 I0715 13:36:11.822037 6 log.go:172] (0xc003115b80) (5) Data frame handling I0715 13:36:11.822073 6 log.go:172] (0xc000c27970) Data frame received for 3 I0715 13:36:11.822092 6 log.go:172] (0xc001e0e6e0) (3) Data frame handling I0715 13:36:11.822119 6 log.go:172] (0xc001e0e6e0) (3) Data frame sent I0715 13:36:11.822138 6 log.go:172] (0xc000c27970) Data frame received for 3 I0715 13:36:11.822155 6 log.go:172] (0xc001e0e6e0) (3) Data frame handling I0715 13:36:11.823385 6 log.go:172] (0xc000c27970) Data frame received for 1 I0715 13:36:11.823413 6 log.go:172] (0xc001fbc6e0) (1) Data frame handling I0715 13:36:11.823423 6 log.go:172] (0xc001fbc6e0) (1) Data frame sent I0715 13:36:11.823436 6 log.go:172] (0xc000c27970) (0xc001fbc6e0) Stream removed, broadcasting: 1 I0715 13:36:11.823459 6 log.go:172] (0xc000c27970) Go away received I0715 13:36:11.823718 6 log.go:172] (0xc000c27970) (0xc001fbc6e0) Stream removed, broadcasting: 1 I0715 13:36:11.823739 6 log.go:172] (0xc000c27970) (0xc001e0e6e0) Stream removed, broadcasting: 3 I0715 13:36:11.823748 6 log.go:172] (0xc000c27970) (0xc003115b80) Stream removed, broadcasting: 5 Jul 15 13:36:11.823: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jul 15 13:36:11.823: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7038 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 13:36:11.823: INFO: >>> kubeConfig: /root/.kube/config I0715 13:36:11.853321 6 log.go:172] (0xc0024d0160) (0xc003115ea0) Create stream I0715 13:36:11.853355 6 log.go:172] (0xc0024d0160) (0xc003115ea0) Stream added, broadcasting: 1 I0715 13:36:11.855568 6 log.go:172] (0xc0024d0160) Reply frame received for 1 I0715 13:36:11.855604 6 log.go:172] (0xc0024d0160) (0xc001fbc780) Create stream I0715 13:36:11.855621 6 log.go:172] (0xc0024d0160) (0xc001fbc780) Stream added, broadcasting: 3 I0715 13:36:11.856713 6 log.go:172] (0xc0024d0160) Reply frame received for 3 I0715 13:36:11.856864 6 log.go:172] (0xc0024d0160) (0xc001fbc820) Create stream I0715 13:36:11.856888 6 log.go:172] (0xc0024d0160) (0xc001fbc820) Stream added, broadcasting: 5 I0715 13:36:11.857988 6 log.go:172] (0xc0024d0160) Reply frame received for 5 I0715 13:36:11.923490 6 log.go:172] (0xc0024d0160) Data frame received for 5 I0715 13:36:11.923533 6 log.go:172] (0xc001fbc820) (5) Data frame handling I0715 13:36:11.923557 6 log.go:172] (0xc0024d0160) Data frame received for 3 I0715 13:36:11.923573 6 log.go:172] (0xc001fbc780) (3) Data frame handling I0715 13:36:11.923589 6 log.go:172] (0xc001fbc780) (3) Data frame sent I0715 13:36:11.923621 6 log.go:172] (0xc0024d0160) Data frame received for 3 I0715 13:36:11.923639 6 log.go:172] (0xc001fbc780) (3) Data frame handling I0715 13:36:11.925486 6 log.go:172] (0xc0024d0160) Data frame received for 1 I0715 13:36:11.925502 6 log.go:172] (0xc003115ea0) (1) Data frame handling I0715 13:36:11.925508 6 log.go:172] (0xc003115ea0) (1) Data frame sent I0715 13:36:11.925516 6 log.go:172] (0xc0024d0160) (0xc003115ea0) Stream removed, broadcasting: 1 I0715 13:36:11.925570 6 log.go:172] (0xc0024d0160) Go away received I0715 13:36:11.925624 6 log.go:172] (0xc0024d0160) (0xc003115ea0) Stream removed, broadcasting: 1 I0715 13:36:11.925640 6 log.go:172] (0xc0024d0160) (0xc001fbc780) Stream removed, 
broadcasting: 3 I0715 13:36:11.925645 6 log.go:172] (0xc0024d0160) (0xc001fbc820) Stream removed, broadcasting: 5 Jul 15 13:36:11.925: INFO: Exec stderr: "" Jul 15 13:36:11.925: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7038 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 13:36:11.925: INFO: >>> kubeConfig: /root/.kube/config I0715 13:36:11.953806 6 log.go:172] (0xc002599ad0) (0xc002c05a40) Create stream I0715 13:36:11.953833 6 log.go:172] (0xc002599ad0) (0xc002c05a40) Stream added, broadcasting: 1 I0715 13:36:11.955840 6 log.go:172] (0xc002599ad0) Reply frame received for 1 I0715 13:36:11.955904 6 log.go:172] (0xc002599ad0) (0xc0018b8320) Create stream I0715 13:36:11.955921 6 log.go:172] (0xc002599ad0) (0xc0018b8320) Stream added, broadcasting: 3 I0715 13:36:11.956966 6 log.go:172] (0xc002599ad0) Reply frame received for 3 I0715 13:36:11.956996 6 log.go:172] (0xc002599ad0) (0xc001fbc8c0) Create stream I0715 13:36:11.957006 6 log.go:172] (0xc002599ad0) (0xc001fbc8c0) Stream added, broadcasting: 5 I0715 13:36:11.957962 6 log.go:172] (0xc002599ad0) Reply frame received for 5 I0715 13:36:12.010418 6 log.go:172] (0xc002599ad0) Data frame received for 3 I0715 13:36:12.010453 6 log.go:172] (0xc0018b8320) (3) Data frame handling I0715 13:36:12.010486 6 log.go:172] (0xc0018b8320) (3) Data frame sent I0715 13:36:12.010498 6 log.go:172] (0xc002599ad0) Data frame received for 3 I0715 13:36:12.010504 6 log.go:172] (0xc0018b8320) (3) Data frame handling I0715 13:36:12.010538 6 log.go:172] (0xc002599ad0) Data frame received for 5 I0715 13:36:12.010586 6 log.go:172] (0xc001fbc8c0) (5) Data frame handling I0715 13:36:12.012219 6 log.go:172] (0xc002599ad0) Data frame received for 1 I0715 13:36:12.012254 6 log.go:172] (0xc002c05a40) (1) Data frame handling I0715 13:36:12.012275 6 log.go:172] (0xc002c05a40) (1) Data frame sent I0715 13:36:12.012303 6 log.go:172] (0xc002599ad0) (0xc002c05a40) Stream removed, broadcasting: 1 I0715 13:36:12.012339 6 log.go:172] (0xc002599ad0) Go away received I0715 13:36:12.012475 6 log.go:172] (0xc002599ad0) (0xc002c05a40) Stream removed, broadcasting: 1 I0715 13:36:12.012496 6 log.go:172] (0xc002599ad0) (0xc0018b8320) Stream removed, broadcasting: 3 I0715 13:36:12.012508 6 log.go:172] (0xc002599ad0) (0xc001fbc8c0) Stream removed, broadcasting: 5 Jul 15 13:36:12.012: INFO: Exec stderr: "" Jul 15 13:36:12.012: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7038 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 13:36:12.012: INFO: >>> kubeConfig: /root/.kube/config I0715 13:36:12.050302 6 log.go:172] (0xc0027424d0) (0xc002c05d60) Create stream I0715 13:36:12.050331 6 log.go:172] (0xc0027424d0) (0xc002c05d60) Stream added, broadcasting: 1 I0715 13:36:12.052143 6 log.go:172] (0xc0027424d0) Reply frame received for 1 I0715 13:36:12.052162 6 log.go:172] (0xc0027424d0) (0xc0018b8460) Create stream I0715 13:36:12.052167 6 log.go:172] (0xc0027424d0) (0xc0018b8460) Stream added, broadcasting: 3 I0715 13:36:12.053014 6 log.go:172] (0xc0027424d0) Reply frame received for 3 I0715 13:36:12.053035 6 log.go:172] (0xc0027424d0) (0xc002c05e00) Create stream I0715 13:36:12.053042 6 log.go:172] (0xc0027424d0) (0xc002c05e00) Stream added, broadcasting: 5 I0715 13:36:12.053745 6 log.go:172] (0xc0027424d0) Reply frame received for 5 I0715 13:36:12.104156 6 
log.go:172] (0xc0027424d0) Data frame received for 5 I0715 13:36:12.104200 6 log.go:172] (0xc002c05e00) (5) Data frame handling I0715 13:36:12.104226 6 log.go:172] (0xc0027424d0) Data frame received for 3 I0715 13:36:12.104239 6 log.go:172] (0xc0018b8460) (3) Data frame handling I0715 13:36:12.104255 6 log.go:172] (0xc0018b8460) (3) Data frame sent I0715 13:36:12.104267 6 log.go:172] (0xc0027424d0) Data frame received for 3 I0715 13:36:12.104278 6 log.go:172] (0xc0018b8460) (3) Data frame handling I0715 13:36:12.105928 6 log.go:172] (0xc0027424d0) Data frame received for 1 I0715 13:36:12.105953 6 log.go:172] (0xc002c05d60) (1) Data frame handling I0715 13:36:12.105985 6 log.go:172] (0xc002c05d60) (1) Data frame sent I0715 13:36:12.106235 6 log.go:172] (0xc0027424d0) (0xc002c05d60) Stream removed, broadcasting: 1 I0715 13:36:12.106290 6 log.go:172] (0xc0027424d0) Go away received I0715 13:36:12.106331 6 log.go:172] (0xc0027424d0) (0xc002c05d60) Stream removed, broadcasting: 1 I0715 13:36:12.106353 6 log.go:172] (0xc0027424d0) (0xc0018b8460) Stream removed, broadcasting: 3 I0715 13:36:12.106366 6 log.go:172] (0xc0027424d0) (0xc002c05e00) Stream removed, broadcasting: 5 Jul 15 13:36:12.106: INFO: Exec stderr: "" Jul 15 13:36:12.106: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7038 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 13:36:12.106: INFO: >>> kubeConfig: /root/.kube/config I0715 13:36:12.136547 6 log.go:172] (0xc0024d0bb0) (0xc0031ea1e0) Create stream I0715 13:36:12.136593 6 log.go:172] (0xc0024d0bb0) (0xc0031ea1e0) Stream added, broadcasting: 1 I0715 13:36:12.138786 6 log.go:172] (0xc0024d0bb0) Reply frame received for 1 I0715 13:36:12.138831 6 log.go:172] (0xc0024d0bb0) (0xc001e0e780) Create stream I0715 13:36:12.138846 6 log.go:172] (0xc0024d0bb0) (0xc001e0e780) Stream added, broadcasting: 3 I0715 13:36:12.139528 6 log.go:172] (0xc0024d0bb0) Reply frame received for 3 I0715 13:36:12.139585 6 log.go:172] (0xc0024d0bb0) (0xc001fbcaa0) Create stream I0715 13:36:12.139615 6 log.go:172] (0xc0024d0bb0) (0xc001fbcaa0) Stream added, broadcasting: 5 I0715 13:36:12.140346 6 log.go:172] (0xc0024d0bb0) Reply frame received for 5 I0715 13:36:12.215749 6 log.go:172] (0xc0024d0bb0) Data frame received for 5 I0715 13:36:12.215804 6 log.go:172] (0xc001fbcaa0) (5) Data frame handling I0715 13:36:12.215832 6 log.go:172] (0xc0024d0bb0) Data frame received for 3 I0715 13:36:12.215847 6 log.go:172] (0xc001e0e780) (3) Data frame handling I0715 13:36:12.215862 6 log.go:172] (0xc001e0e780) (3) Data frame sent I0715 13:36:12.215875 6 log.go:172] (0xc0024d0bb0) Data frame received for 3 I0715 13:36:12.215887 6 log.go:172] (0xc001e0e780) (3) Data frame handling I0715 13:36:12.217208 6 log.go:172] (0xc0024d0bb0) Data frame received for 1 I0715 13:36:12.217242 6 log.go:172] (0xc0031ea1e0) (1) Data frame handling I0715 13:36:12.217270 6 log.go:172] (0xc0031ea1e0) (1) Data frame sent I0715 13:36:12.217295 6 log.go:172] (0xc0024d0bb0) (0xc0031ea1e0) Stream removed, broadcasting: 1 I0715 13:36:12.217318 6 log.go:172] (0xc0024d0bb0) Go away received I0715 13:36:12.217401 6 log.go:172] (0xc0024d0bb0) (0xc0031ea1e0) Stream removed, broadcasting: 1 I0715 13:36:12.217424 6 log.go:172] (0xc0024d0bb0) (0xc001e0e780) Stream removed, broadcasting: 3 I0715 13:36:12.217435 6 log.go:172] (0xc0024d0bb0) (0xc001fbcaa0) Stream removed, broadcasting: 5 Jul 15 13:36:12.217: INFO: Exec stderr: "" 
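The exec checks above reduce to three cases, reproducible directly with kubectl exec against the pods this test created: for busybox-1 and busybox-2 of test-pod the file is kubelet-managed (it typically begins with a "# Kubernetes-managed hosts file." banner); busybox-3 mounts its own /etc/hosts, so the kubelet leaves it alone; and test-host-network-pod runs with hostNetwork=true, so it sees the node's file:

    kubectl -n e2e-kubelet-etc-hosts-7038 exec test-pod -c busybox-1 -- cat /etc/hosts
    kubectl -n e2e-kubelet-etc-hosts-7038 exec test-pod -c busybox-3 -- cat /etc/hosts
    kubectl -n e2e-kubelet-etc-hosts-7038 exec test-host-network-pod -c busybox-1 -- cat /etc/hosts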
[AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:36:12.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-7038" for this suite. Jul 15 13:37:02.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:37:02.306: INFO: namespace e2e-kubelet-etc-hosts-7038 deletion completed in 50.084195996s • [SLOW TEST:61.230 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:37:02.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 15 13:37:02.349: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:37:03.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5003" for this suite. 
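Against a v1.15 apiserver, the create/delete cycle above corresponds to the apiextensions.k8s.io/v1beta1 API. A self-contained sketch with a hypothetical group example.com and kind Foo:

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com
    spec:
      group: example.com
      version: v1
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
    EOF
    kubectl delete customresourcedefinition foos.example.com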
Jul 15 13:37:09.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:37:09.536: INFO: namespace custom-resource-definition-5003 deletion completed in 6.087655092s • [SLOW TEST:7.230 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:37:09.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-e727f675-eaca-4873-89ef-55b8bc6b5b83 STEP: Creating secret with name s-test-opt-upd-46bfc9b9-dbbd-49e5-a997-777816727676 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e727f675-eaca-4873-89ef-55b8bc6b5b83 STEP: Updating secret s-test-opt-upd-46bfc9b9-dbbd-49e5-a997-777816727676 STEP: Creating secret with name s-test-opt-create-18f92a8e-dd67-4c34-be9e-31668b093a00 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:37:17.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3434" for this suite. 
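"Optional" here refers to secret volume sources marked optional: true - the pod starts even while a referenced secret is absent, and the kubelet later materialises or removes the mounted files as the secrets are created, updated, or deleted, which is what the "waiting to observe update in volume" step polls for. A minimal sketch of such a pod (pod and secret names hypothetical):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-optional-demo
    spec:
      containers:
      - name: watcher
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "while true; do ls -R /etc/secret-volume 2>&1; sleep 5; done"]
        volumeMounts:
        - name: opt
          mountPath: /etc/secret-volume
      volumes:
      - name: opt
        secret:
          secretName: s-test-opt
          optional: true
    EOF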
Jul 15 13:37:40.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:37:40.287: INFO: namespace secrets-3434 deletion completed in 22.339891303s • [SLOW TEST:30.751 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:37:40.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Jul 15 13:37:40.386: INFO: Waiting up to 5m0s for pod "client-containers-68850315-e200-4a2f-b725-d2c0c325a4fd" in namespace "containers-6364" to be "success or failure" Jul 15 13:37:40.400: INFO: Pod "client-containers-68850315-e200-4a2f-b725-d2c0c325a4fd": Phase="Pending", Reason="", readiness=false. Elapsed: 13.616026ms Jul 15 13:37:42.404: INFO: Pod "client-containers-68850315-e200-4a2f-b725-d2c0c325a4fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017893254s Jul 15 13:37:44.408: INFO: Pod "client-containers-68850315-e200-4a2f-b725-d2c0c325a4fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021958643s STEP: Saw pod success Jul 15 13:37:44.408: INFO: Pod "client-containers-68850315-e200-4a2f-b725-d2c0c325a4fd" satisfied condition "success or failure" Jul 15 13:37:44.411: INFO: Trying to get logs from node iruya-worker pod client-containers-68850315-e200-4a2f-b725-d2c0c325a4fd container test-container: STEP: delete the pod Jul 15 13:37:44.452: INFO: Waiting for pod client-containers-68850315-e200-4a2f-b725-d2c0c325a4fd to disappear Jul 15 13:37:44.477: INFO: Pod client-containers-68850315-e200-4a2f-b725-d2c0c325a4fd no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:37:44.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6364" for this suite. 
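The "override all" pod above sets both command and args, which replace the image's ENTRYPOINT and CMD respectively. A minimal equivalent (pod name hypothetical):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: command-override-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["/bin/echo"]            # replaces the image ENTRYPOINT
        args: ["override", "arguments"]   # replaces the image CMD
    EOF
    kubectl logs command-override-demo    # should print: override arguments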
Jul 15 13:37:50.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:37:50.568: INFO: namespace containers-6364 deletion completed in 6.088073938s • [SLOW TEST:10.281 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:37:50.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5742 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jul 15 13:37:50.712: INFO: Found 0 stateful pods, waiting for 3 Jul 15 13:38:00.717: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 15 13:38:00.717: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 15 13:38:00.717: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 15 13:38:10.717: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 15 13:38:10.717: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 15 13:38:10.717: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jul 15 13:38:10.744: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jul 15 13:38:20.785: INFO: Updating stateful set ss2 Jul 15 13:38:20.797: INFO: Waiting for Pod statefulset-5742/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jul 15 13:38:31.177: INFO: Found 2 stateful pods, waiting for 3 Jul 15 13:38:41.333: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 15 13:38:41.333: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 15 13:38:41.333: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, 
currently Running - Ready=true STEP: Performing a phased rolling update Jul 15 13:38:41.357: INFO: Updating stateful set ss2 Jul 15 13:38:41.405: INFO: Waiting for Pod statefulset-5742/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jul 15 13:38:51.430: INFO: Updating stateful set ss2 Jul 15 13:38:51.479: INFO: Waiting for StatefulSet statefulset-5742/ss2 to complete update Jul 15 13:38:51.479: INFO: Waiting for Pod statefulset-5742/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jul 15 13:39:01.487: INFO: Deleting all statefulset in ns statefulset-5742 Jul 15 13:39:01.490: INFO: Scaling statefulset ss2 to 0 Jul 15 13:39:21.523: INFO: Waiting for statefulset status.replicas updated to 0 Jul 15 13:39:21.526: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:39:21.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5742" for this suite. Jul 15 13:39:27.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:39:27.715: INFO: namespace statefulset-5742 deletion completed in 6.155791459s • [SLOW TEST:97.146 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:39:27.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 15 13:39:27.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-4804' Jul 15 13:39:32.356: INFO: stderr: "" Jul 15 13:39:32.357: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jul 15 13:39:37.407: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-4804 -o json' Jul 15 13:39:37.507: INFO: stderr: "" Jul 15 13:39:37.507: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-15T13:39:32Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-4804\",\n \"resourceVersion\": \"1028328\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-4804/pods/e2e-test-nginx-pod\",\n \"uid\": \"a1b406bb-ed40-43fa-887f-76db78b77f38\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-n4wdf\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-n4wdf\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-n4wdf\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-15T13:39:32Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-15T13:39:35Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-15T13:39:35Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-15T13:39:32Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://56dc80cec10e587283b062b925dc0cd69b77edf73737e2cf94604c9b93a46388\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-07-15T13:39:35Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.181\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-07-15T13:39:32Z\"\n }\n}\n" STEP: replace the image in the pod Jul 15 13:39:37.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4804' Jul 15 13:39:37.940: INFO: stderr: "" Jul 15 13:39:37.940: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Jul 15 13:39:37.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4804' Jul 15 13:39:46.762: INFO: stderr: "" Jul 15 13:39:46.762: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:39:46.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4804" for this suite. Jul 15 13:39:52.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:39:52.851: INFO: namespace kubectl-4804 deletion completed in 6.083218236s • [SLOW TEST:25.136 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:39:52.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-510fe996-bb9a-407f-90bc-7c60cc83127c STEP: Creating a pod to test consume secrets Jul 15 13:39:52.991: INFO: Waiting up to 5m0s for pod "pod-secrets-b47f1890-5cfd-4323-a6c2-d8d550cf0c11" in namespace "secrets-4284" to be "success or failure" Jul 15 13:39:52.995: INFO: Pod "pod-secrets-b47f1890-5cfd-4323-a6c2-d8d550cf0c11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.439835ms Jul 15 13:39:55.015: INFO: Pod "pod-secrets-b47f1890-5cfd-4323-a6c2-d8d550cf0c11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024570168s Jul 15 13:39:57.020: INFO: Pod "pod-secrets-b47f1890-5cfd-4323-a6c2-d8d550cf0c11": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02899266s STEP: Saw pod success Jul 15 13:39:57.020: INFO: Pod "pod-secrets-b47f1890-5cfd-4323-a6c2-d8d550cf0c11" satisfied condition "success or failure" Jul 15 13:39:57.023: INFO: Trying to get logs from node iruya-worker pod pod-secrets-b47f1890-5cfd-4323-a6c2-d8d550cf0c11 container secret-volume-test: STEP: delete the pod Jul 15 13:39:57.038: INFO: Waiting for pod pod-secrets-b47f1890-5cfd-4323-a6c2-d8d550cf0c11 to disappear Jul 15 13:39:57.043: INFO: Pod pod-secrets-b47f1890-5cfd-4323-a6c2-d8d550cf0c11 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:39:57.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4284" for this suite. Jul 15 13:40:03.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:40:03.140: INFO: namespace secrets-4284 deletion completed in 6.094037883s STEP: Destroying namespace "secret-namespace-2636" for this suite. Jul 15 13:40:09.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:40:09.219: INFO: namespace secret-namespace-2636 deletion completed in 6.078773534s • [SLOW TEST:16.367 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:40:09.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-faf71a3e-ec18-4e5a-80f9-3fb6246838e9 STEP: Creating configMap with name cm-test-opt-upd-dcfec5a4-4b76-4bab-9406-536030d3193b STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-faf71a3e-ec18-4e5a-80f9-3fb6246838e9 STEP: Updating configmap cm-test-opt-upd-dcfec5a4-4b76-4bab-9406-536030d3193b STEP: Creating configMap with name cm-test-opt-create-6dc77143-5a9e-4e51-8839-01eb105558ea STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:40:17.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2022" for this suite. 
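As with the secrets variant earlier, the configMap volumes here are mounted with optional: true, and the test waits for the kubelet to propagate the delete/update/create into the mounted files; this can take up to the kubelet sync period, on the order of a minute. A sketch of driving one such update by hand, with a hypothetical configMap name; the --dry-run/replace pair is the v1.15-era way to overwrite its data:

    kubectl create configmap cm-test-opt-upd --from-literal=data-1=value-1
    # later, overwrite the value and watch the mounted file change:
    kubectl create configmap cm-test-opt-upd --from-literal=data-3=value-3 \
      --dry-run -o yaml | kubectl replace -f -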
Jul 15 13:40:39.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:40:39.478: INFO: namespace configmap-2022 deletion completed in 22.092152145s • [SLOW TEST:30.259 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:40:39.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 15 13:40:39.556: INFO: Waiting up to 5m0s for pod "pod-5f3ec753-1405-4e60-bb09-5c0453c812bf" in namespace "emptydir-6267" to be "success or failure" Jul 15 13:40:39.565: INFO: Pod "pod-5f3ec753-1405-4e60-bb09-5c0453c812bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.735608ms Jul 15 13:40:41.569: INFO: Pod "pod-5f3ec753-1405-4e60-bb09-5c0453c812bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012896823s Jul 15 13:40:43.573: INFO: Pod "pod-5f3ec753-1405-4e60-bb09-5c0453c812bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016543865s STEP: Saw pod success Jul 15 13:40:43.573: INFO: Pod "pod-5f3ec753-1405-4e60-bb09-5c0453c812bf" satisfied condition "success or failure" Jul 15 13:40:43.575: INFO: Trying to get logs from node iruya-worker pod pod-5f3ec753-1405-4e60-bb09-5c0453c812bf container test-container: STEP: delete the pod Jul 15 13:40:43.687: INFO: Waiting for pod pod-5f3ec753-1405-4e60-bb09-5c0453c812bf to disappear Jul 15 13:40:43.697: INFO: Pod pod-5f3ec753-1405-4e60-bb09-5c0453c812bf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:40:43.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6267" for this suite. 
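For reference, the emptydir (root,0777,default) case above reduces to a pod of the following shape. The real test uses the mounttest image to report the mount's type and mode; this sketch substitutes an illustrative shell probe under that assumption:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # Create a file, set 0777 permissions, and print the resulting modes.
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -ld /test-volume /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium: backed by node-local disk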
Jul 15 13:40:49.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:40:49.807: INFO: namespace emptydir-6267 deletion completed in 6.107265779s • [SLOW TEST:10.328 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:40:49.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jul 15 13:40:49.912: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2234,SelfLink:/api/v1/namespaces/watch-2234/configmaps/e2e-watch-test-label-changed,UID:778731fe-3c51-41c1-8eac-55872dbe8642,ResourceVersion:1028615,Generation:0,CreationTimestamp:2020-07-15 13:40:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 15 13:40:49.912: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2234,SelfLink:/api/v1/namespaces/watch-2234/configmaps/e2e-watch-test-label-changed,UID:778731fe-3c51-41c1-8eac-55872dbe8642,ResourceVersion:1028616,Generation:0,CreationTimestamp:2020-07-15 13:40:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jul 15 13:40:49.913: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2234,SelfLink:/api/v1/namespaces/watch-2234/configmaps/e2e-watch-test-label-changed,UID:778731fe-3c51-41c1-8eac-55872dbe8642,ResourceVersion:1028617,Generation:0,CreationTimestamp:2020-07-15 13:40:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jul 15 13:40:59.957: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2234,SelfLink:/api/v1/namespaces/watch-2234/configmaps/e2e-watch-test-label-changed,UID:778731fe-3c51-41c1-8eac-55872dbe8642,ResourceVersion:1028638,Generation:0,CreationTimestamp:2020-07-15 13:40:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 15 13:40:59.957: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2234,SelfLink:/api/v1/namespaces/watch-2234/configmaps/e2e-watch-test-label-changed,UID:778731fe-3c51-41c1-8eac-55872dbe8642,ResourceVersion:1028639,Generation:0,CreationTimestamp:2020-07-15 13:40:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jul 15 13:40:59.958: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2234,SelfLink:/api/v1/namespaces/watch-2234/configmaps/e2e-watch-test-label-changed,UID:778731fe-3c51-41c1-8eac-55872dbe8642,ResourceVersion:1028640,Generation:0,CreationTimestamp:2020-07-15 13:40:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:40:59.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2234" for this suite. 
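The watch above is opened with a label selector, so only ConfigMaps carrying the watched label produce events: changing the label away yields a DELETED notification and restoring it yields a fresh ADDED notification, even though the object never actually left the cluster. The selected object, reduced from the dumps above to a manifest sketch:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  labels:
    watch-this-configmap: label-changed-and-restored   # the label the watch selects on
data:
  mutation: "1"                      # bumped on each modification in the test

The equivalent selector on the CLI would be -l watch-this-configmap=label-changed-and-restored.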
Jul 15 13:41:05.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:41:06.082: INFO: namespace watch-2234 deletion completed in 6.110784325s • [SLOW TEST:16.275 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:41:06.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Jul 15 13:41:06.109: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Jul 15 13:41:06.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2443' Jul 15 13:41:06.390: INFO: stderr: "" Jul 15 13:41:06.390: INFO: stdout: "service/redis-slave created\n" Jul 15 13:41:06.390: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Jul 15 13:41:06.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2443' Jul 15 13:41:06.677: INFO: stderr: "" Jul 15 13:41:06.677: INFO: stdout: "service/redis-master created\n" Jul 15 13:41:06.677: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jul 15 13:41:06.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2443' Jul 15 13:41:06.965: INFO: stderr: "" Jul 15 13:41:06.965: INFO: stdout: "service/frontend created\n" Jul 15 13:41:06.965: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Jul 15 13:41:06.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2443' Jul 15 13:41:07.250: INFO: stderr: "" Jul 15 13:41:07.251: INFO: stdout: "deployment.apps/frontend created\n" Jul 15 13:41:07.251: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jul 15 13:41:07.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2443' Jul 15 13:41:07.613: INFO: stderr: "" Jul 15 13:41:07.613: INFO: stdout: "deployment.apps/redis-master created\n" Jul 15 13:41:07.613: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Jul 15 13:41:07.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2443' Jul 15 13:41:07.900: INFO: stderr: "" Jul 15 13:41:07.900: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Jul 15 13:41:07.900: INFO: Waiting for all frontend pods to be Running. Jul 15 13:41:17.951: INFO: Waiting for frontend to serve content. Jul 15 13:41:18.006: INFO: Trying to add a new entry to the guestbook. Jul 15 13:41:18.022: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jul 15 13:41:18.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2443' Jul 15 13:41:18.200: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 15 13:41:18.200: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jul 15 13:41:18.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2443' Jul 15 13:41:18.500: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 15 13:41:18.500: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jul 15 13:41:18.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2443' Jul 15 13:41:18.654: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 15 13:41:18.654: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 15 13:41:18.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2443' Jul 15 13:41:18.786: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 15 13:41:18.786: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 15 13:41:18.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2443' Jul 15 13:41:18.938: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 15 13:41:18.938: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jul 15 13:41:18.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2443' Jul 15 13:41:19.144: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 15 13:41:19.144: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:41:19.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2443" for this suite. 
Jul 15 13:41:57.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:41:57.390: INFO: namespace kubectl-2443 deletion completed in 38.151775443s • [SLOW TEST:51.308 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:41:57.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-44be88ca-f9eb-435d-83fa-2abff991ce12 STEP: Creating a pod to test consume configMaps Jul 15 13:41:57.496: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-eb9a9d1a-04e6-4950-b3da-2e405372d4a0" in namespace "projected-8171" to be "success or failure" Jul 15 13:41:57.500: INFO: Pod "pod-projected-configmaps-eb9a9d1a-04e6-4950-b3da-2e405372d4a0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.595559ms Jul 15 13:41:59.504: INFO: Pod "pod-projected-configmaps-eb9a9d1a-04e6-4950-b3da-2e405372d4a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0077182s Jul 15 13:42:01.509: INFO: Pod "pod-projected-configmaps-eb9a9d1a-04e6-4950-b3da-2e405372d4a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012168784s STEP: Saw pod success Jul 15 13:42:01.509: INFO: Pod "pod-projected-configmaps-eb9a9d1a-04e6-4950-b3da-2e405372d4a0" satisfied condition "success or failure" Jul 15 13:42:01.511: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-eb9a9d1a-04e6-4950-b3da-2e405372d4a0 container projected-configmap-volume-test: STEP: delete the pod Jul 15 13:42:01.623: INFO: Waiting for pod pod-projected-configmaps-eb9a9d1a-04e6-4950-b3da-2e405372d4a0 to disappear Jul 15 13:42:01.819: INFO: Pod pod-projected-configmaps-eb9a9d1a-04e6-4950-b3da-2e405372d4a0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:42:01.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8171" for this suite. 
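The projected-configMap-with-mappings case above combines two things: a projected volume whose configMap source remaps a key to a custom path ("with mappings"), and a pod-level securityContext so the file is read as a non-root user. A sketch under those assumptions, with illustrative names and UID:

apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-nonroot-demo   # illustrative
spec:
  securityContext:
    runAsUser: 1000                        # non-root UID; illustrative value
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected/path/to/data"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: some-configmap             # illustrative name
          items:
          - key: data-1                    # the mapping: key is exposed at a custom path
            path: path/to/data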
Jul 15 13:42:07.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:42:07.919: INFO: namespace projected-8171 deletion completed in 6.096138829s • [SLOW TEST:10.528 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:42:07.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 15 13:42:07.971: INFO: Waiting up to 5m0s for pod "pod-d23328ed-a81f-4126-a527-ef0df4730105" in namespace "emptydir-5439" to be "success or failure" Jul 15 13:42:08.022: INFO: Pod "pod-d23328ed-a81f-4126-a527-ef0df4730105": Phase="Pending", Reason="", readiness=false. Elapsed: 51.264193ms Jul 15 13:42:10.125: INFO: Pod "pod-d23328ed-a81f-4126-a527-ef0df4730105": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154326812s Jul 15 13:42:12.202: INFO: Pod "pod-d23328ed-a81f-4126-a527-ef0df4730105": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.231007358s STEP: Saw pod success Jul 15 13:42:12.202: INFO: Pod "pod-d23328ed-a81f-4126-a527-ef0df4730105" satisfied condition "success or failure" Jul 15 13:42:12.204: INFO: Trying to get logs from node iruya-worker pod pod-d23328ed-a81f-4126-a527-ef0df4730105 container test-container: STEP: delete the pod Jul 15 13:42:12.221: INFO: Waiting for pod pod-d23328ed-a81f-4126-a527-ef0df4730105 to disappear Jul 15 13:42:12.281: INFO: Pod pod-d23328ed-a81f-4126-a527-ef0df4730105 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:42:12.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5439" for this suite. 
Jul 15 13:42:18.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:42:18.509: INFO: namespace emptydir-5439 deletion completed in 6.225227938s • [SLOW TEST:10.590 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:42:18.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:42:23.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9367" for this suite. 
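Adoption in the ReplicationController test above works because the controller's selector matches the label on the pre-existing pod, so the controller takes ownership of it (setting an ownerReference) instead of creating a replacement replica. A sketch of the pair, with an illustrative image choice:

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption               # the label the controller selects on
spec:
  containers:
  - name: pod-adoption
    image: k8s.gcr.io/pause:3.1      # any long-running image works; pause is used elsewhere in this suite
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption               # matches the orphan pod, so it is adopted rather than duplicated
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: k8s.gcr.io/pause:3.1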
Jul 15 13:42:45.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:42:45.742: INFO: namespace replication-controller-9367 deletion completed in 22.08885591s • [SLOW TEST:27.233 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:42:45.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-ng4t STEP: Creating a pod to test atomic-volume-subpath Jul 15 13:42:45.835: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-ng4t" in namespace "subpath-1454" to be "success or failure" Jul 15 13:42:45.856: INFO: Pod "pod-subpath-test-projected-ng4t": Phase="Pending", Reason="", readiness=false. Elapsed: 21.56611ms Jul 15 13:42:47.860: INFO: Pod "pod-subpath-test-projected-ng4t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02498452s Jul 15 13:42:49.863: INFO: Pod "pod-subpath-test-projected-ng4t": Phase="Running", Reason="", readiness=true. Elapsed: 4.028872702s Jul 15 13:42:51.868: INFO: Pod "pod-subpath-test-projected-ng4t": Phase="Running", Reason="", readiness=true. Elapsed: 6.033660025s Jul 15 13:42:53.873: INFO: Pod "pod-subpath-test-projected-ng4t": Phase="Running", Reason="", readiness=true. Elapsed: 8.038196304s Jul 15 13:42:55.877: INFO: Pod "pod-subpath-test-projected-ng4t": Phase="Running", Reason="", readiness=true. Elapsed: 10.042385803s Jul 15 13:42:57.881: INFO: Pod "pod-subpath-test-projected-ng4t": Phase="Running", Reason="", readiness=true. Elapsed: 12.046286437s Jul 15 13:42:59.885: INFO: Pod "pod-subpath-test-projected-ng4t": Phase="Running", Reason="", readiness=true. Elapsed: 14.050606138s Jul 15 13:43:01.890: INFO: Pod "pod-subpath-test-projected-ng4t": Phase="Running", Reason="", readiness=true. Elapsed: 16.05509006s Jul 15 13:43:03.894: INFO: Pod "pod-subpath-test-projected-ng4t": Phase="Running", Reason="", readiness=true. Elapsed: 18.0595055s Jul 15 13:43:05.898: INFO: Pod "pod-subpath-test-projected-ng4t": Phase="Running", Reason="", readiness=true. Elapsed: 20.063822044s Jul 15 13:43:07.903: INFO: Pod "pod-subpath-test-projected-ng4t": Phase="Running", Reason="", readiness=true. Elapsed: 22.068214413s Jul 15 13:43:09.907: INFO: Pod "pod-subpath-test-projected-ng4t": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.072765819s Jul 15 13:43:11.912: INFO: Pod "pod-subpath-test-projected-ng4t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.077249863s STEP: Saw pod success Jul 15 13:43:11.912: INFO: Pod "pod-subpath-test-projected-ng4t" satisfied condition "success or failure" Jul 15 13:43:11.916: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-ng4t container test-container-subpath-projected-ng4t: STEP: delete the pod Jul 15 13:43:11.947: INFO: Waiting for pod pod-subpath-test-projected-ng4t to disappear Jul 15 13:43:11.957: INFO: Pod pod-subpath-test-projected-ng4t no longer exists STEP: Deleting pod pod-subpath-test-projected-ng4t Jul 15 13:43:11.957: INFO: Deleting pod "pod-subpath-test-projected-ng4t" in namespace "subpath-1454" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:43:11.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1454" for this suite. Jul 15 13:43:17.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:43:18.050: INFO: namespace subpath-1454 deletion completed in 6.088109887s • [SLOW TEST:32.308 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:43:18.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-9427 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9427 to expose endpoints map[] Jul 15 13:43:18.202: INFO: Get endpoints failed (17.433954ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jul 15 13:43:19.206: INFO: successfully validated that service endpoint-test2 in namespace services-9427 exposes endpoints map[] (1.021034922s elapsed) STEP: Creating pod pod1 in namespace services-9427 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9427 to expose endpoints map[pod1:[80]] Jul 15 13:43:23.314: INFO: successfully validated that service endpoint-test2 in namespace services-9427 exposes endpoints map[pod1:[80]] (4.100428126s elapsed) STEP: Creating pod pod2 in namespace services-9427 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9427 to expose endpoints map[pod1:[80] 
pod2:[80]] Jul 15 13:43:27.401: INFO: successfully validated that service endpoint-test2 in namespace services-9427 exposes endpoints map[pod1:[80] pod2:[80]] (4.084623199s elapsed) STEP: Deleting pod pod1 in namespace services-9427 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9427 to expose endpoints map[pod2:[80]] Jul 15 13:43:28.432: INFO: successfully validated that service endpoint-test2 in namespace services-9427 exposes endpoints map[pod2:[80]] (1.025579622s elapsed) STEP: Deleting pod pod2 in namespace services-9427 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9427 to expose endpoints map[] Jul 15 13:43:29.452: INFO: successfully validated that service endpoint-test2 in namespace services-9427 exposes endpoints map[] (1.015473772s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:43:29.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9427" for this suite. Jul 15 13:43:51.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:43:51.733: INFO: namespace services-9427 deletion completed in 22.096037161s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:33.682 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:43:51.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jul 15 13:43:51.819: INFO: PodSpec: initContainers in spec.initContainers Jul 15 13:44:41.897: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b43cc1bc-77ad-4ba3-b119-03c6001a9235", GenerateName:"", Namespace:"init-container-1250", SelfLink:"/api/v1/namespaces/init-container-1250/pods/pod-init-b43cc1bc-77ad-4ba3-b119-03c6001a9235", UID:"b2645b7e-005e-4acd-ae8e-1b821839867e", ResourceVersion:"1029506", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63730417431, loc:(*time.Location)(0x7eb18c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", 
"time":"819114318"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-w62gp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0022d4e40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-w62gp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-w62gp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), 
EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-w62gp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001474918), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023d98c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0014749a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0014749c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0014749c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0014749cc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730417431, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730417431, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730417431, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730417431, loc:(*time.Location)(0x7eb18c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.5", PodIP:"10.244.1.189", StartTime:(*v1.Time)(0xc001f5ed00), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001f5ed60), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000ca0700)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d8ddf3ea91d4035dcdd85ef68432d7572f3b209258d8d15461e8aa6026437748"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f5eda0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f5ed20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:44:41.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1250" for this suite. 
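The pod dump above is dense, but the shape of the failing pod is simple: two init containers, the first of which always exits non-zero, so under RestartPolicy Always the kubelet keeps restarting init1 (RestartCount:3 in the dump) and never reaches init2 or the app container run1. Reconstructed from the dump as a manifest sketch (the pod name is generated in the real run):

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo                # illustrative; the real name is generated
spec:
  restartPolicy: Always              # init1 is retried forever instead of failing the pod
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]          # always exits non-zero
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]           # never runs while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:                       # equal requests and limits: Guaranteed QoS, as in the dump
      limits:
        cpu: 100m
        memory: "52428800"
      requests:
        cpu: 100m
        memory: "52428800"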
Jul 15 13:45:03.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:45:04.006: INFO: namespace init-container-1250 deletion completed in 22.084571855s • [SLOW TEST:72.272 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:45:04.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jul 15 13:45:04.131: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:45:04.139: INFO: Number of nodes with available pods: 0 Jul 15 13:45:04.139: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:45:05.143: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:45:05.145: INFO: Number of nodes with available pods: 0 Jul 15 13:45:05.145: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:45:06.193: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:45:06.196: INFO: Number of nodes with available pods: 0 Jul 15 13:45:06.197: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:45:07.168: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:45:07.172: INFO: Number of nodes with available pods: 0 Jul 15 13:45:07.172: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:45:08.146: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:45:08.149: INFO: Number of nodes with available pods: 1 Jul 15 13:45:08.149: INFO: Node iruya-worker is running more than one daemon pod Jul 15 13:45:09.145: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:45:09.148: INFO: Number of nodes with available pods: 2 
Jul 15 13:45:09.148: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jul 15 13:45:09.164: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:45:09.167: INFO: Number of nodes with available pods: 1 Jul 15 13:45:09.167: INFO: Node iruya-worker2 is running more than one daemon pod Jul 15 13:45:10.174: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:45:10.177: INFO: Number of nodes with available pods: 1 Jul 15 13:45:10.177: INFO: Node iruya-worker2 is running more than one daemon pod Jul 15 13:45:11.173: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:45:11.176: INFO: Number of nodes with available pods: 1 Jul 15 13:45:11.176: INFO: Node iruya-worker2 is running more than one daemon pod Jul 15 13:45:12.173: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:45:12.176: INFO: Number of nodes with available pods: 1 Jul 15 13:45:12.176: INFO: Node iruya-worker2 is running more than one daemon pod Jul 15 13:45:13.178: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:45:13.181: INFO: Number of nodes with available pods: 1 Jul 15 13:45:13.181: INFO: Node iruya-worker2 is running more than one daemon pod Jul 15 13:45:14.173: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:45:14.177: INFO: Number of nodes with available pods: 1 Jul 15 13:45:14.177: INFO: Node iruya-worker2 is running more than one daemon pod Jul 15 13:45:15.172: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:45:15.176: INFO: Number of nodes with available pods: 1 Jul 15 13:45:15.176: INFO: Node iruya-worker2 is running more than one daemon pod Jul 15 13:45:16.173: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 13:45:16.176: INFO: Number of nodes with available pods: 2 Jul 15 13:45:16.177: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6858, will wait for the garbage collector to delete the pods Jul 15 13:45:16.239: INFO: Deleting DaemonSet.extensions daemon-set took: 6.668634ms Jul 15 13:45:16.540: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.292115ms Jul 15 13:45:26.843: INFO: Number of nodes with available pods: 0 Jul 15 13:45:26.843: INFO: Number of running nodes: 0, number of available pods: 0 Jul 15 13:45:26.847: INFO: 
daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6858/daemonsets","resourceVersion":"1029687"},"items":null} Jul 15 13:45:26.849: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6858/pods","resourceVersion":"1029687"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:45:26.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6858" for this suite. Jul 15 13:45:32.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:45:32.980: INFO: namespace daemonsets-6858 deletion completed in 6.116660205s • [SLOW TEST:28.974 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:45:32.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 15 13:45:33.095: INFO: Waiting up to 5m0s for pod "pod-738956f6-d3b4-4144-92c3-a86a7b2496f7" in namespace "emptydir-9756" to be "success or failure" Jul 15 13:45:33.121: INFO: Pod "pod-738956f6-d3b4-4144-92c3-a86a7b2496f7": Phase="Pending", Reason="", readiness=false. Elapsed: 25.895485ms Jul 15 13:45:35.124: INFO: Pod "pod-738956f6-d3b4-4144-92c3-a86a7b2496f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029373697s Jul 15 13:45:37.128: INFO: Pod "pod-738956f6-d3b4-4144-92c3-a86a7b2496f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033492474s STEP: Saw pod success Jul 15 13:45:37.128: INFO: Pod "pod-738956f6-d3b4-4144-92c3-a86a7b2496f7" satisfied condition "success or failure" Jul 15 13:45:37.132: INFO: Trying to get logs from node iruya-worker2 pod pod-738956f6-d3b4-4144-92c3-a86a7b2496f7 container test-container: STEP: delete the pod Jul 15 13:45:37.147: INFO: Waiting for pod pod-738956f6-d3b4-4144-92c3-a86a7b2496f7 to disappear Jul 15 13:45:37.154: INFO: Pod pod-738956f6-d3b4-4144-92c3-a86a7b2496f7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:45:37.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9756" for this suite. 
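The (root,0666,tmpfs) emptydir case above differs from the default-medium case earlier only in the volume's medium field, which makes the kubelet back the volume with RAM instead of node disk. A sketch under the same illustrative-probe assumption as before:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo          # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # Show that the mount is tmpfs-backed, then exercise 0666 permissions.
    command: ["sh", "-c", "grep /test-volume /proc/mounts && touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # RAM-backed tmpfs instead of node-local disk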
Jul 15 13:45:43.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:45:43.247: INFO: namespace emptydir-9756 deletion completed in 6.089304126s • [SLOW TEST:10.266 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:45:43.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Jul 15 13:45:43.316: INFO: Waiting up to 5m0s for pod "var-expansion-2dbac188-5082-4af7-a352-b97a8054455b" in namespace "var-expansion-5334" to be "success or failure" Jul 15 13:45:43.320: INFO: Pod "var-expansion-2dbac188-5082-4af7-a352-b97a8054455b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.99183ms Jul 15 13:45:45.414: INFO: Pod "var-expansion-2dbac188-5082-4af7-a352-b97a8054455b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097708419s Jul 15 13:45:47.418: INFO: Pod "var-expansion-2dbac188-5082-4af7-a352-b97a8054455b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10162485s STEP: Saw pod success Jul 15 13:45:47.418: INFO: Pod "var-expansion-2dbac188-5082-4af7-a352-b97a8054455b" satisfied condition "success or failure" Jul 15 13:45:47.419: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-2dbac188-5082-4af7-a352-b97a8054455b container dapi-container: STEP: delete the pod Jul 15 13:45:47.460: INFO: Waiting for pod var-expansion-2dbac188-5082-4af7-a352-b97a8054455b to disappear Jul 15 13:45:47.470: INFO: Pod var-expansion-2dbac188-5082-4af7-a352-b97a8054455b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:45:47.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5334" for this suite. 
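Substitution in the Variable Expansion test above is Kubernetes-level, not shell-level: $(VAR) references in a container's command and args are expanded by the kubelet from the container's declared env before the process starts, with no shell involved. A sketch with illustrative names and value:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    env:
    - name: MESSAGE
      value: "test-message"          # illustrative value
    # $(MESSAGE) below is expanded from the env declaration above at container start.
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]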
Jul 15 13:45:53.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:45:53.559: INFO: namespace var-expansion-5334 deletion completed in 6.085877127s • [SLOW TEST:10.312 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:45:53.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Jul 15 13:45:53.663: INFO: Waiting up to 5m0s for pod "pod-274d4baa-0d4d-4221-8f51-cb3317881092" in namespace "emptydir-6842" to be "success or failure" Jul 15 13:45:53.683: INFO: Pod "pod-274d4baa-0d4d-4221-8f51-cb3317881092": Phase="Pending", Reason="", readiness=false. Elapsed: 19.379475ms Jul 15 13:45:55.687: INFO: Pod "pod-274d4baa-0d4d-4221-8f51-cb3317881092": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023706611s Jul 15 13:45:57.691: INFO: Pod "pod-274d4baa-0d4d-4221-8f51-cb3317881092": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027324813s STEP: Saw pod success Jul 15 13:45:57.691: INFO: Pod "pod-274d4baa-0d4d-4221-8f51-cb3317881092" satisfied condition "success or failure" Jul 15 13:45:57.693: INFO: Trying to get logs from node iruya-worker2 pod pod-274d4baa-0d4d-4221-8f51-cb3317881092 container test-container: STEP: delete the pod Jul 15 13:45:57.717: INFO: Waiting for pod pod-274d4baa-0d4d-4221-8f51-cb3317881092 to disappear Jul 15 13:45:57.761: INFO: Pod pod-274d4baa-0d4d-4221-8f51-cb3317881092 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:45:57.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6842" for this suite. 
Jul 15 13:46:03.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:46:03.896: INFO: namespace emptydir-6842 deletion completed in 6.131328862s • [SLOW TEST:10.336 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:46:03.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jul 15 13:46:03.984: INFO: Waiting up to 5m0s for pod "downward-api-7fdf92cf-ece5-4568-9941-a1cdf95c2a1e" in namespace "downward-api-3337" to be "success or failure" Jul 15 13:46:03.998: INFO: Pod "downward-api-7fdf92cf-ece5-4568-9941-a1cdf95c2a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.497379ms Jul 15 13:46:06.001: INFO: Pod "downward-api-7fdf92cf-ece5-4568-9941-a1cdf95c2a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017402859s Jul 15 13:46:08.005: INFO: Pod "downward-api-7fdf92cf-ece5-4568-9941-a1cdf95c2a1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021316075s STEP: Saw pod success Jul 15 13:46:08.005: INFO: Pod "downward-api-7fdf92cf-ece5-4568-9941-a1cdf95c2a1e" satisfied condition "success or failure" Jul 15 13:46:08.008: INFO: Trying to get logs from node iruya-worker2 pod downward-api-7fdf92cf-ece5-4568-9941-a1cdf95c2a1e container dapi-container: STEP: delete the pod Jul 15 13:46:08.066: INFO: Waiting for pod downward-api-7fdf92cf-ece5-4568-9941-a1cdf95c2a1e to disappear Jul 15 13:46:08.093: INFO: Pod downward-api-7fdf92cf-ece5-4568-9941-a1cdf95c2a1e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:46:08.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3337" for this suite. 
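The behaviour this spec pins down: when a container declares no CPU or memory limits, downward-API env vars wired through resourceFieldRef fall back to the node's allocatable capacity. A sketch of the env wiring, names illustrative:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // defaultLimitsPod exposes limits.cpu/limits.memory via the downward API.
    // With no resources set on the container, these resolve to node allocatable.
    func defaultLimitsPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{
                        {
                            Name: "CPU_LIMIT",
                            ValueFrom: &corev1.EnvVarSource{
                                ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
                            },
                        },
                        {
                            Name: "MEMORY_LIMIT",
                            ValueFrom: &corev1.EnvVarSource{
                                ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
                            },
                        },
                    },
                }},
            },
        }
    }
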
Jul 15 13:46:14.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:46:14.425: INFO: namespace downward-api-3337 deletion completed in 6.329033118s • [SLOW TEST:10.529 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:46:14.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8642 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 15 13:46:14.489: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 15 13:46:48.664: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.87:8080/dial?request=hostName&protocol=http&host=10.244.1.196&port=8080&tries=1'] Namespace:pod-network-test-8642 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 13:46:48.664: INFO: >>> kubeConfig: /root/.kube/config I0715 13:46:48.703998 6 log.go:172] (0xc001efd3f0) (0xc0017d5cc0) Create stream I0715 13:46:48.704037 6 log.go:172] (0xc001efd3f0) (0xc0017d5cc0) Stream added, broadcasting: 1 I0715 13:46:48.706563 6 log.go:172] (0xc001efd3f0) Reply frame received for 1 I0715 13:46:48.706607 6 log.go:172] (0xc001efd3f0) (0xc0019437c0) Create stream I0715 13:46:48.706617 6 log.go:172] (0xc001efd3f0) (0xc0019437c0) Stream added, broadcasting: 3 I0715 13:46:48.707627 6 log.go:172] (0xc001efd3f0) Reply frame received for 3 I0715 13:46:48.707668 6 log.go:172] (0xc001efd3f0) (0xc0017d5d60) Create stream I0715 13:46:48.707684 6 log.go:172] (0xc001efd3f0) (0xc0017d5d60) Stream added, broadcasting: 5 I0715 13:46:48.708702 6 log.go:172] (0xc001efd3f0) Reply frame received for 5 I0715 13:46:48.792311 6 log.go:172] (0xc001efd3f0) Data frame received for 3 I0715 13:46:48.792360 6 log.go:172] (0xc0019437c0) (3) Data frame handling I0715 13:46:48.792377 6 log.go:172] (0xc0019437c0) (3) Data frame sent I0715 13:46:48.793254 6 log.go:172] (0xc001efd3f0) Data frame received for 3 I0715 13:46:48.793277 6 log.go:172] (0xc0019437c0) (3) Data frame handling I0715 13:46:48.793458 6 log.go:172] (0xc001efd3f0) Data frame received for 5 I0715 13:46:48.793478 6 log.go:172] (0xc0017d5d60) (5) Data frame handling I0715 13:46:48.795316 6 log.go:172] (0xc001efd3f0) Data frame received for 1 I0715 13:46:48.795345 6 log.go:172] (0xc0017d5cc0) (1) Data frame handling I0715 13:46:48.795359 6 
log.go:172] (0xc0017d5cc0) (1) Data frame sent I0715 13:46:48.795376 6 log.go:172] (0xc001efd3f0) (0xc0017d5cc0) Stream removed, broadcasting: 1 I0715 13:46:48.795402 6 log.go:172] (0xc001efd3f0) Go away received I0715 13:46:48.795542 6 log.go:172] (0xc001efd3f0) (0xc0017d5cc0) Stream removed, broadcasting: 1 I0715 13:46:48.795568 6 log.go:172] (0xc001efd3f0) (0xc0019437c0) Stream removed, broadcasting: 3 I0715 13:46:48.795587 6 log.go:172] (0xc001efd3f0) (0xc0017d5d60) Stream removed, broadcasting: 5 Jul 15 13:46:48.795: INFO: Waiting for endpoints: map[] Jul 15 13:46:48.813: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.87:8080/dial?request=hostName&protocol=http&host=10.244.2.86&port=8080&tries=1'] Namespace:pod-network-test-8642 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 13:46:48.813: INFO: >>> kubeConfig: /root/.kube/config I0715 13:46:48.854332 6 log.go:172] (0xc00262ba20) (0xc003099900) Create stream I0715 13:46:48.854363 6 log.go:172] (0xc00262ba20) (0xc003099900) Stream added, broadcasting: 1 I0715 13:46:48.857343 6 log.go:172] (0xc00262ba20) Reply frame received for 1 I0715 13:46:48.857391 6 log.go:172] (0xc00262ba20) (0xc0030999a0) Create stream I0715 13:46:48.857413 6 log.go:172] (0xc00262ba20) (0xc0030999a0) Stream added, broadcasting: 3 I0715 13:46:48.858493 6 log.go:172] (0xc00262ba20) Reply frame received for 3 I0715 13:46:48.858544 6 log.go:172] (0xc00262ba20) (0xc002652500) Create stream I0715 13:46:48.858561 6 log.go:172] (0xc00262ba20) (0xc002652500) Stream added, broadcasting: 5 I0715 13:46:48.859659 6 log.go:172] (0xc00262ba20) Reply frame received for 5 I0715 13:46:48.927893 6 log.go:172] (0xc00262ba20) Data frame received for 3 I0715 13:46:48.927926 6 log.go:172] (0xc0030999a0) (3) Data frame handling I0715 13:46:48.927947 6 log.go:172] (0xc0030999a0) (3) Data frame sent I0715 13:46:48.928657 6 log.go:172] (0xc00262ba20) Data frame received for 3 I0715 13:46:48.928699 6 log.go:172] (0xc0030999a0) (3) Data frame handling I0715 13:46:48.928759 6 log.go:172] (0xc00262ba20) Data frame received for 5 I0715 13:46:48.928772 6 log.go:172] (0xc002652500) (5) Data frame handling I0715 13:46:48.930457 6 log.go:172] (0xc00262ba20) Data frame received for 1 I0715 13:46:48.930469 6 log.go:172] (0xc003099900) (1) Data frame handling I0715 13:46:48.930480 6 log.go:172] (0xc003099900) (1) Data frame sent I0715 13:46:48.930487 6 log.go:172] (0xc00262ba20) (0xc003099900) Stream removed, broadcasting: 1 I0715 13:46:48.930498 6 log.go:172] (0xc00262ba20) Go away received I0715 13:46:48.930650 6 log.go:172] (0xc00262ba20) (0xc003099900) Stream removed, broadcasting: 1 I0715 13:46:48.930688 6 log.go:172] (0xc00262ba20) (0xc0030999a0) Stream removed, broadcasting: 3 I0715 13:46:48.930695 6 log.go:172] (0xc00262ba20) (0xc002652500) Stream removed, broadcasting: 5 Jul 15 13:46:48.930: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:46:48.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8642" for this suite. 
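Decoding the execs above: the suite curls a /dial endpoint on one test pod (10.244.2.87), which in turn probes each peer pod (10.244.1.196, then 10.244.2.86) over HTTP and reports which hostnames answered; the final "Waiting for endpoints: map[]" lines mean no expected pod is left unreached. A sketch of that probe as a plain HTTP client, with the /dial contract taken from the URLs in the log and all addresses illustrative:

    package sketch

    import (
        "fmt"
        "io/ioutil"
        "net/http"
    )

    // dial asks the prober pod to fetch hostName from a target pod, mirroring
    // the URL the e2e framework curls inside the host test container.
    func dial(proberIP, targetIP string, port int) (string, error) {
        url := fmt.Sprintf(
            "http://%s:8080/dial?request=hostName&protocol=http&host=%s&port=%d&tries=1",
            proberIP, targetIP, port)
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := ioutil.ReadAll(resp.Body) // JSON listing the responses per try
        return string(body), err
    }
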
Jul 15 13:47:12.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:47:13.022: INFO: namespace pod-network-test-8642 deletion completed in 24.086550074s • [SLOW TEST:58.596 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:47:13.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jul 15 13:47:13.157: INFO: Waiting up to 5m0s for pod "pod-a4866cde-e2c9-40ff-986d-734f40086dee" in namespace "emptydir-8357" to be "success or failure" Jul 15 13:47:13.160: INFO: Pod "pod-a4866cde-e2c9-40ff-986d-734f40086dee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.742668ms Jul 15 13:47:15.164: INFO: Pod "pod-a4866cde-e2c9-40ff-986d-734f40086dee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007317571s Jul 15 13:47:17.168: INFO: Pod "pod-a4866cde-e2c9-40ff-986d-734f40086dee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011744557s STEP: Saw pod success Jul 15 13:47:17.168: INFO: Pod "pod-a4866cde-e2c9-40ff-986d-734f40086dee" satisfied condition "success or failure" Jul 15 13:47:17.171: INFO: Trying to get logs from node iruya-worker pod pod-a4866cde-e2c9-40ff-986d-734f40086dee container test-container: STEP: delete the pod Jul 15 13:47:17.396: INFO: Waiting for pod pod-a4866cde-e2c9-40ff-986d-734f40086dee to disappear Jul 15 13:47:17.480: INFO: Pod pod-a4866cde-e2c9-40ff-986d-734f40086dee no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:47:17.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8357" for this suite. 
Jul 15 13:47:23.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:47:23.583: INFO: namespace emptydir-8357 deletion completed in 6.099505304s • [SLOW TEST:10.560 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:47:23.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8547 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jul 15 13:47:23.651: INFO: Found 0 stateful pods, waiting for 3 Jul 15 13:47:33.657: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 15 13:47:33.657: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 15 13:47:33.657: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Jul 15 13:47:43.657: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 15 13:47:43.657: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 15 13:47:43.658: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jul 15 13:47:43.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8547 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 15 13:47:43.957: INFO: stderr: "I0715 13:47:43.815604 1881 log.go:172] (0xc0009d8420) (0xc000744820) Create stream\nI0715 13:47:43.815659 1881 log.go:172] (0xc0009d8420) (0xc000744820) Stream added, broadcasting: 1\nI0715 13:47:43.821110 1881 log.go:172] (0xc0009d8420) Reply frame received for 1\nI0715 13:47:43.821148 1881 log.go:172] (0xc0009d8420) (0xc000744000) Create stream\nI0715 13:47:43.821160 1881 log.go:172] (0xc0009d8420) (0xc000744000) Stream added, broadcasting: 3\nI0715 13:47:43.822143 1881 log.go:172] (0xc0009d8420) Reply frame received for 3\nI0715 13:47:43.822177 1881 log.go:172] (0xc0009d8420) (0xc0005f41e0) Create stream\nI0715 13:47:43.822186 1881 log.go:172] (0xc0009d8420) (0xc0005f41e0) Stream added, broadcasting: 5\nI0715 13:47:43.823170 1881 log.go:172] (0xc0009d8420) Reply 
frame received for 5\nI0715 13:47:43.904869 1881 log.go:172] (0xc0009d8420) Data frame received for 5\nI0715 13:47:43.904895 1881 log.go:172] (0xc0005f41e0) (5) Data frame handling\nI0715 13:47:43.904913 1881 log.go:172] (0xc0005f41e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0715 13:47:43.949282 1881 log.go:172] (0xc0009d8420) Data frame received for 3\nI0715 13:47:43.949321 1881 log.go:172] (0xc000744000) (3) Data frame handling\nI0715 13:47:43.949377 1881 log.go:172] (0xc000744000) (3) Data frame sent\nI0715 13:47:43.949594 1881 log.go:172] (0xc0009d8420) Data frame received for 5\nI0715 13:47:43.949613 1881 log.go:172] (0xc0005f41e0) (5) Data frame handling\nI0715 13:47:43.949649 1881 log.go:172] (0xc0009d8420) Data frame received for 3\nI0715 13:47:43.949700 1881 log.go:172] (0xc000744000) (3) Data frame handling\nI0715 13:47:43.951365 1881 log.go:172] (0xc0009d8420) Data frame received for 1\nI0715 13:47:43.951389 1881 log.go:172] (0xc000744820) (1) Data frame handling\nI0715 13:47:43.951402 1881 log.go:172] (0xc000744820) (1) Data frame sent\nI0715 13:47:43.951424 1881 log.go:172] (0xc0009d8420) (0xc000744820) Stream removed, broadcasting: 1\nI0715 13:47:43.951444 1881 log.go:172] (0xc0009d8420) Go away received\nI0715 13:47:43.951907 1881 log.go:172] (0xc0009d8420) (0xc000744820) Stream removed, broadcasting: 1\nI0715 13:47:43.951932 1881 log.go:172] (0xc0009d8420) (0xc000744000) Stream removed, broadcasting: 3\nI0715 13:47:43.951943 1881 log.go:172] (0xc0009d8420) (0xc0005f41e0) Stream removed, broadcasting: 5\n" Jul 15 13:47:43.957: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 15 13:47:43.957: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jul 15 13:47:54.012: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jul 15 13:48:04.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8547 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 15 13:48:04.309: INFO: stderr: "I0715 13:48:04.221196 1902 log.go:172] (0xc0001246e0) (0xc00031a6e0) Create stream\nI0715 13:48:04.221271 1902 log.go:172] (0xc0001246e0) (0xc00031a6e0) Stream added, broadcasting: 1\nI0715 13:48:04.224234 1902 log.go:172] (0xc0001246e0) Reply frame received for 1\nI0715 13:48:04.224280 1902 log.go:172] (0xc0001246e0) (0xc00031a780) Create stream\nI0715 13:48:04.224302 1902 log.go:172] (0xc0001246e0) (0xc00031a780) Stream added, broadcasting: 3\nI0715 13:48:04.225464 1902 log.go:172] (0xc0001246e0) Reply frame received for 3\nI0715 13:48:04.225507 1902 log.go:172] (0xc0001246e0) (0xc00037e320) Create stream\nI0715 13:48:04.225521 1902 log.go:172] (0xc0001246e0) (0xc00037e320) Stream added, broadcasting: 5\nI0715 13:48:04.226618 1902 log.go:172] (0xc0001246e0) Reply frame received for 5\nI0715 13:48:04.301534 1902 log.go:172] (0xc0001246e0) Data frame received for 3\nI0715 13:48:04.301591 1902 log.go:172] (0xc00031a780) (3) Data frame handling\nI0715 13:48:04.301618 1902 log.go:172] (0xc00031a780) (3) Data frame sent\nI0715 13:48:04.301638 1902 log.go:172] (0xc0001246e0) Data frame received for 3\nI0715 13:48:04.301658 1902 log.go:172] (0xc00031a780) (3) Data frame handling\nI0715 13:48:04.301710 1902 
log.go:172] (0xc0001246e0) Data frame received for 5\nI0715 13:48:04.301739 1902 log.go:172] (0xc00037e320) (5) Data frame handling\nI0715 13:48:04.301771 1902 log.go:172] (0xc00037e320) (5) Data frame sent\nI0715 13:48:04.301787 1902 log.go:172] (0xc0001246e0) Data frame received for 5\nI0715 13:48:04.301796 1902 log.go:172] (0xc00037e320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0715 13:48:04.303142 1902 log.go:172] (0xc0001246e0) Data frame received for 1\nI0715 13:48:04.303168 1902 log.go:172] (0xc00031a6e0) (1) Data frame handling\nI0715 13:48:04.303204 1902 log.go:172] (0xc00031a6e0) (1) Data frame sent\nI0715 13:48:04.303241 1902 log.go:172] (0xc0001246e0) (0xc00031a6e0) Stream removed, broadcasting: 1\nI0715 13:48:04.303275 1902 log.go:172] (0xc0001246e0) Go away received\nI0715 13:48:04.304228 1902 log.go:172] (0xc0001246e0) (0xc00031a6e0) Stream removed, broadcasting: 1\nI0715 13:48:04.304275 1902 log.go:172] (0xc0001246e0) (0xc00031a780) Stream removed, broadcasting: 3\nI0715 13:48:04.304342 1902 log.go:172] (0xc0001246e0) (0xc00037e320) Stream removed, broadcasting: 5\n" Jul 15 13:48:04.309: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 15 13:48:04.309: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 15 13:48:24.328: INFO: Waiting for StatefulSet statefulset-8547/ss2 to complete update STEP: Rolling back to a previous revision Jul 15 13:48:34.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8547 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 15 13:48:34.617: INFO: stderr: "I0715 13:48:34.463602 1923 log.go:172] (0xc000aa6420) (0xc0005826e0) Create stream\nI0715 13:48:34.463673 1923 log.go:172] (0xc000aa6420) (0xc0005826e0) Stream added, broadcasting: 1\nI0715 13:48:34.467175 1923 log.go:172] (0xc000aa6420) Reply frame received for 1\nI0715 13:48:34.467221 1923 log.go:172] (0xc000aa6420) (0xc000582000) Create stream\nI0715 13:48:34.467239 1923 log.go:172] (0xc000aa6420) (0xc000582000) Stream added, broadcasting: 3\nI0715 13:48:34.468200 1923 log.go:172] (0xc000aa6420) Reply frame received for 3\nI0715 13:48:34.468250 1923 log.go:172] (0xc000aa6420) (0xc0005d4140) Create stream\nI0715 13:48:34.468264 1923 log.go:172] (0xc000aa6420) (0xc0005d4140) Stream added, broadcasting: 5\nI0715 13:48:34.469315 1923 log.go:172] (0xc000aa6420) Reply frame received for 5\nI0715 13:48:34.576146 1923 log.go:172] (0xc000aa6420) Data frame received for 5\nI0715 13:48:34.576168 1923 log.go:172] (0xc0005d4140) (5) Data frame handling\nI0715 13:48:34.576180 1923 log.go:172] (0xc0005d4140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0715 13:48:34.609623 1923 log.go:172] (0xc000aa6420) Data frame received for 3\nI0715 13:48:34.609646 1923 log.go:172] (0xc000582000) (3) Data frame handling\nI0715 13:48:34.609667 1923 log.go:172] (0xc000582000) (3) Data frame sent\nI0715 13:48:34.609978 1923 log.go:172] (0xc000aa6420) Data frame received for 5\nI0715 13:48:34.609993 1923 log.go:172] (0xc0005d4140) (5) Data frame handling\nI0715 13:48:34.610015 1923 log.go:172] (0xc000aa6420) Data frame received for 3\nI0715 13:48:34.610027 1923 log.go:172] (0xc000582000) (3) Data frame handling\nI0715 13:48:34.611745 1923 log.go:172] (0xc000aa6420) Data frame received for 1\nI0715 13:48:34.611855 1923 log.go:172] (0xc0005826e0) (1) Data frame 
handling\nI0715 13:48:34.611895 1923 log.go:172] (0xc0005826e0) (1) Data frame sent\nI0715 13:48:34.611913 1923 log.go:172] (0xc000aa6420) (0xc0005826e0) Stream removed, broadcasting: 1\nI0715 13:48:34.611931 1923 log.go:172] (0xc000aa6420) Go away received\nI0715 13:48:34.612287 1923 log.go:172] (0xc000aa6420) (0xc0005826e0) Stream removed, broadcasting: 1\nI0715 13:48:34.612311 1923 log.go:172] (0xc000aa6420) (0xc000582000) Stream removed, broadcasting: 3\nI0715 13:48:34.612326 1923 log.go:172] (0xc000aa6420) (0xc0005d4140) Stream removed, broadcasting: 5\n" Jul 15 13:48:34.617: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 15 13:48:34.617: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 15 13:48:44.650: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jul 15 13:48:54.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8547 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 15 13:48:54.906: INFO: stderr: "I0715 13:48:54.840785 1943 log.go:172] (0xc0008ba420) (0xc0003146e0) Create stream\nI0715 13:48:54.840847 1943 log.go:172] (0xc0008ba420) (0xc0003146e0) Stream added, broadcasting: 1\nI0715 13:48:54.842673 1943 log.go:172] (0xc0008ba420) Reply frame received for 1\nI0715 13:48:54.842711 1943 log.go:172] (0xc0008ba420) (0xc0007aa000) Create stream\nI0715 13:48:54.842723 1943 log.go:172] (0xc0008ba420) (0xc0007aa000) Stream added, broadcasting: 3\nI0715 13:48:54.843550 1943 log.go:172] (0xc0008ba420) Reply frame received for 3\nI0715 13:48:54.843587 1943 log.go:172] (0xc0008ba420) (0xc000314780) Create stream\nI0715 13:48:54.843596 1943 log.go:172] (0xc0008ba420) (0xc000314780) Stream added, broadcasting: 5\nI0715 13:48:54.844390 1943 log.go:172] (0xc0008ba420) Reply frame received for 5\nI0715 13:48:54.900229 1943 log.go:172] (0xc0008ba420) Data frame received for 5\nI0715 13:48:54.900266 1943 log.go:172] (0xc000314780) (5) Data frame handling\nI0715 13:48:54.900279 1943 log.go:172] (0xc000314780) (5) Data frame sent\nI0715 13:48:54.900288 1943 log.go:172] (0xc0008ba420) Data frame received for 5\nI0715 13:48:54.900296 1943 log.go:172] (0xc000314780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0715 13:48:54.900320 1943 log.go:172] (0xc0008ba420) Data frame received for 3\nI0715 13:48:54.900327 1943 log.go:172] (0xc0007aa000) (3) Data frame handling\nI0715 13:48:54.900336 1943 log.go:172] (0xc0007aa000) (3) Data frame sent\nI0715 13:48:54.900344 1943 log.go:172] (0xc0008ba420) Data frame received for 3\nI0715 13:48:54.900351 1943 log.go:172] (0xc0007aa000) (3) Data frame handling\nI0715 13:48:54.901751 1943 log.go:172] (0xc0008ba420) Data frame received for 1\nI0715 13:48:54.901776 1943 log.go:172] (0xc0003146e0) (1) Data frame handling\nI0715 13:48:54.901784 1943 log.go:172] (0xc0003146e0) (1) Data frame sent\nI0715 13:48:54.901794 1943 log.go:172] (0xc0008ba420) (0xc0003146e0) Stream removed, broadcasting: 1\nI0715 13:48:54.901829 1943 log.go:172] (0xc0008ba420) Go away received\nI0715 13:48:54.902087 1943 log.go:172] (0xc0008ba420) (0xc0003146e0) Stream removed, broadcasting: 1\nI0715 13:48:54.902100 1943 log.go:172] (0xc0008ba420) (0xc0007aa000) Stream removed, broadcasting: 3\nI0715 13:48:54.902108 1943 log.go:172] (0xc0008ba420) (0xc000314780) Stream removed, broadcasting: 5\n" Jul 15 13:48:54.906: INFO: stdout: 
"'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 15 13:48:54.906: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 15 13:49:05.385: INFO: Waiting for StatefulSet statefulset-8547/ss2 to complete update Jul 15 13:49:05.385: INFO: Waiting for Pod statefulset-8547/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jul 15 13:49:05.385: INFO: Waiting for Pod statefulset-8547/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jul 15 13:49:05.385: INFO: Waiting for Pod statefulset-8547/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jul 15 13:49:15.396: INFO: Waiting for StatefulSet statefulset-8547/ss2 to complete update Jul 15 13:49:15.396: INFO: Waiting for Pod statefulset-8547/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jul 15 13:49:15.396: INFO: Waiting for Pod statefulset-8547/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jul 15 13:49:25.392: INFO: Waiting for StatefulSet statefulset-8547/ss2 to complete update Jul 15 13:49:25.392: INFO: Waiting for Pod statefulset-8547/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jul 15 13:49:35.393: INFO: Deleting all statefulset in ns statefulset-8547 Jul 15 13:49:35.396: INFO: Scaling statefulset ss2 to 0 Jul 15 13:50:05.452: INFO: Waiting for statefulset status.replicas updated to 0 Jul 15 13:50:05.454: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:50:05.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8547" for this suite. 
Jul 15 13:50:11.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:50:11.619: INFO: namespace statefulset-8547 deletion completed in 6.114766824s • [SLOW TEST:168.036 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:50:11.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-2266 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2266 to expose endpoints map[] Jul 15 13:50:11.724: INFO: Get endpoints failed (3.316203ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jul 15 13:50:12.727: INFO: successfully validated that service multi-endpoint-test in namespace services-2266 exposes endpoints map[] (1.006232488s elapsed) STEP: Creating pod pod1 in namespace services-2266 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2266 to expose endpoints map[pod1:[100]] Jul 15 13:50:16.800: INFO: successfully validated that service multi-endpoint-test in namespace services-2266 exposes endpoints map[pod1:[100]] (4.068675039s elapsed) STEP: Creating pod pod2 in namespace services-2266 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2266 to expose endpoints map[pod1:[100] pod2:[101]] Jul 15 13:50:19.888: INFO: successfully validated that service multi-endpoint-test in namespace services-2266 exposes endpoints map[pod1:[100] pod2:[101]] (3.084693188s elapsed) STEP: Deleting pod pod1 in namespace services-2266 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2266 to expose endpoints map[pod2:[101]] Jul 15 13:50:20.954: INFO: successfully validated that service multi-endpoint-test in namespace services-2266 exposes endpoints map[pod2:[101]] (1.061021085s elapsed) STEP: Deleting pod pod2 in namespace services-2266 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2266 to expose endpoints map[] Jul 15 13:50:21.970: INFO: successfully validated that service multi-endpoint-test in namespace services-2266 exposes endpoints map[] (1.010848137s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:50:22.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2266" for this suite. Jul 15 13:50:28.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:50:28.192: INFO: namespace services-2266 deletion completed in 6.128982437s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:16.571 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:50:28.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-2265/secret-test-78262d81-c1e1-4760-b619-7562caa432f4 STEP: Creating a pod to test consume secrets Jul 15 13:50:28.255: INFO: Waiting up to 5m0s for pod "pod-configmaps-4c1d0971-e77f-45dc-bc87-a3ea15dc22c3" in namespace "secrets-2265" to be "success or failure" Jul 15 13:50:28.259: INFO: Pod "pod-configmaps-4c1d0971-e77f-45dc-bc87-a3ea15dc22c3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.873112ms Jul 15 13:50:30.308: INFO: Pod "pod-configmaps-4c1d0971-e77f-45dc-bc87-a3ea15dc22c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052510442s Jul 15 13:50:32.321: INFO: Pod "pod-configmaps-4c1d0971-e77f-45dc-bc87-a3ea15dc22c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065441632s STEP: Saw pod success Jul 15 13:50:32.321: INFO: Pod "pod-configmaps-4c1d0971-e77f-45dc-bc87-a3ea15dc22c3" satisfied condition "success or failure" Jul 15 13:50:32.323: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-4c1d0971-e77f-45dc-bc87-a3ea15dc22c3 container env-test: STEP: delete the pod Jul 15 13:50:32.352: INFO: Waiting for pod pod-configmaps-4c1d0971-e77f-45dc-bc87-a3ea15dc22c3 to disappear Jul 15 13:50:32.379: INFO: Pod pod-configmaps-4c1d0971-e77f-45dc-bc87-a3ea15dc22c3 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:50:32.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2265" for this suite. 
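The secret-to-env plumbing being validated: a Secret key is injected into a container env var via secretKeyRef, and the test asserts the value shows up in the container's environment. A sketch with the secret name, key, and value all illustrative:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // secretEnv builds a secret and the env var that consumes one of its keys.
    func secretEnv() (*corev1.Secret, corev1.EnvVar) {
        secret := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
            Data:       map[string][]byte{"data-1": []byte("value-1")},
        }
        env := corev1.EnvVar{
            Name: "SECRET_DATA",
            ValueFrom: &corev1.EnvVarSource{
                SecretKeyRef: &corev1.SecretKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
                    Key:                  "data-1",
                },
            },
        }
        return secret, env
    }
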
Jul 15 13:50:38.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:50:38.477: INFO: namespace secrets-2265 deletion completed in 6.094989441s • [SLOW TEST:10.286 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:50:38.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jul 15 13:50:38.590: INFO: Waiting up to 5m0s for pod "downward-api-4e8c6764-b956-4110-b31e-41f094b77dc1" in namespace "downward-api-3668" to be "success or failure" Jul 15 13:50:38.602: INFO: Pod "downward-api-4e8c6764-b956-4110-b31e-41f094b77dc1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.966724ms Jul 15 13:50:40.606: INFO: Pod "downward-api-4e8c6764-b956-4110-b31e-41f094b77dc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015895299s Jul 15 13:50:42.610: INFO: Pod "downward-api-4e8c6764-b956-4110-b31e-41f094b77dc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020157739s STEP: Saw pod success Jul 15 13:50:42.610: INFO: Pod "downward-api-4e8c6764-b956-4110-b31e-41f094b77dc1" satisfied condition "success or failure" Jul 15 13:50:42.613: INFO: Trying to get logs from node iruya-worker pod downward-api-4e8c6764-b956-4110-b31e-41f094b77dc1 container dapi-container: STEP: delete the pod Jul 15 13:50:42.698: INFO: Waiting for pod downward-api-4e8c6764-b956-4110-b31e-41f094b77dc1 to disappear Jul 15 13:50:42.704: INFO: Pod downward-api-4e8c6764-b956-4110-b31e-41f094b77dc1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:50:42.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3668" for this suite. 
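Unlike the resourceFieldRef case earlier in this run, this test uses fieldRef, which reads pod metadata and status rather than resource accounting. A sketch of the three env vars under test, names illustrative:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // fieldRefEnv returns env vars for pod name, namespace and IP, resolved by
    // the kubelet at container start via the downward API.
    func fieldRefEnv() []corev1.EnvVar {
        mk := func(name, path string) corev1.EnvVar {
            return corev1.EnvVar{
                Name: name,
                ValueFrom: &corev1.EnvVarSource{
                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
                },
            }
        }
        return []corev1.EnvVar{
            mk("POD_NAME", "metadata.name"),
            mk("POD_NAMESPACE", "metadata.namespace"),
            // Comes from pod status, so it is empty until the pod is scheduled
            // and has an IP assigned.
            mk("POD_IP", "status.podIP"),
        }
    }
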
Jul 15 13:50:48.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:50:48.806: INFO: namespace downward-api-3668 deletion completed in 6.098849686s • [SLOW TEST:10.328 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:50:48.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 15 13:50:48.878: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jul 15 13:50:48.903: INFO: Pod name sample-pod: Found 0 pods out of 1 Jul 15 13:50:53.908: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 15 13:50:53.908: INFO: Creating deployment "test-rolling-update-deployment" Jul 15 13:50:53.913: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jul 15 13:50:53.939: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jul 15 13:50:55.946: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jul 15 13:50:55.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730417853, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730417853, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730417854, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730417853, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 13:50:57.953: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jul 15 13:50:57.961: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-5153,SelfLink:/apis/apps/v1/namespaces/deployment-5153/deployments/test-rolling-update-deployment,UID:bbcdf75b-0a96-4e08-ac60-3d73d0da6c35,ResourceVersion:1031095,Generation:1,CreationTimestamp:2020-07-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-15 13:50:53 +0000 UTC 2020-07-15 13:50:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-15 13:50:56 +0000 UTC 2020-07-15 13:50:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jul 15 13:50:57.964: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-5153,SelfLink:/apis/apps/v1/namespaces/deployment-5153/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:0f92d163-3859-4d15-95dd-7dc2244386b7,ResourceVersion:1031083,Generation:1,CreationTimestamp:2020-07-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment bbcdf75b-0a96-4e08-ac60-3d73d0da6c35 0xc002afb077 0xc002afb078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jul 15 13:50:57.964: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jul 15 13:50:57.964: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-5153,SelfLink:/apis/apps/v1/namespaces/deployment-5153/replicasets/test-rolling-update-controller,UID:463b0df2-8128-465c-8bfc-d2af9223cf2f,ResourceVersion:1031093,Generation:2,CreationTimestamp:2020-07-15 13:50:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 
2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment bbcdf75b-0a96-4e08-ac60-3d73d0da6c35 0xc002afafa7 0xc002afafa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 15 13:50:57.967: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-6dfjq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-6dfjq,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-5153,SelfLink:/api/v1/namespaces/deployment-5153/pods/test-rolling-update-deployment-79f6b9d75c-6dfjq,UID:8b4a6acf-aa0b-4e6c-8f84-9290d694ada0,ResourceVersion:1031082,Generation:0,CreationTimestamp:2020-07-15 13:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 0f92d163-3859-4d15-95dd-7dc2244386b7 0xc002afb947 0xc002afb948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cw89z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cw89z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-cw89z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002afb9c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002afb9e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:50:54 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:50:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:50:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 13:50:53 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.203,StartTime:2020-07-15 13:50:54 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-15 13:50:56 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://c038cf616d216bc926c01f7cb425fbe0bd9783a1620a926bda1393c55f709584}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:50:57.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5153" for this suite. 
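Two details worth pulling out of the dumps above: the adopted replica set test-rolling-update-controller is matched by labels and scaled to zero (Replicas:*0) once the new replica set test-rolling-update-deployment-79f6b9d75c is available, and the strategy in play is the default RollingUpdate with 25% maxUnavailable and 25% maxSurge. A sketch of how that strategy block is expressed with the apps/v1 types:

    package sketch

    import (
        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // rollingUpdateStrategy reproduces the defaults visible in the deployment
    // dump: at most 25% of desired pods unavailable, and at most 25% extra
    // pods created, at any point during a rollout.
    func rollingUpdateStrategy() appsv1.DeploymentStrategy {
        maxUnavailable := intstr.FromString("25%")
        maxSurge := intstr.FromString("25%")
        return appsv1.DeploymentStrategy{
            Type: appsv1.RollingUpdateDeploymentStrategyType,
            RollingUpdate: &appsv1.RollingUpdateDeployment{
                MaxUnavailable: &maxUnavailable,
                MaxSurge:       &maxSurge,
            },
        }
    }
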
Jul 15 13:51:04.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:51:04.093: INFO: namespace deployment-5153 deletion completed in 6.12271359s • [SLOW TEST:15.287 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:51:04.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:51:08.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4644" for this suite. 
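The spec above asserts that pod-level hostAliases end up in the container's /etc/hosts; the log shows no manifest, so here is a minimal illustrative sketch (names and IPs are examples):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: busybox
    command: ["sh", "-c", "cat /etc/hosts"]
EOF

# The kubelet-managed /etc/hosts now carries the extra entries:
kubectl logs hostaliases-demo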
Jul 15 13:51:58.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:51:58.406: INFO: namespace kubelet-test-4644 deletion completed in 50.088599891s • [SLOW TEST:54.313 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:51:58.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jul 15 13:51:58.457: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. Jul 15 13:51:58.874: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jul 15 13:52:01.249: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730417918, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730417918, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730417918, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730417918, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 13:52:03.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730417918, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730417918, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63730417918, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730417918, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 13:52:05.885: INFO: Waited 626.131726ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:52:06.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6025" for this suite. Jul 15 13:52:12.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:52:12.583: INFO: namespace aggregator-6025 deletion completed in 6.252392154s • [SLOW TEST:14.177 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:52:12.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jul 15 13:52:12.650: INFO: Waiting up to 5m0s for pod "downward-api-dbb4d798-f6a7-492e-a4fa-017710ca6e27" in namespace "downward-api-8932" to be "success or failure" Jul 15 13:52:12.665: INFO: Pod "downward-api-dbb4d798-f6a7-492e-a4fa-017710ca6e27": Phase="Pending", Reason="", readiness=false. Elapsed: 15.625374ms Jul 15 13:52:14.669: INFO: Pod "downward-api-dbb4d798-f6a7-492e-a4fa-017710ca6e27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019614102s Jul 15 13:52:16.673: INFO: Pod "downward-api-dbb4d798-f6a7-492e-a4fa-017710ca6e27": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023687028s STEP: Saw pod success Jul 15 13:52:16.673: INFO: Pod "downward-api-dbb4d798-f6a7-492e-a4fa-017710ca6e27" satisfied condition "success or failure" Jul 15 13:52:16.676: INFO: Trying to get logs from node iruya-worker pod downward-api-dbb4d798-f6a7-492e-a4fa-017710ca6e27 container dapi-container: STEP: delete the pod Jul 15 13:52:16.702: INFO: Waiting for pod downward-api-dbb4d798-f6a7-492e-a4fa-017710ca6e27 to disappear Jul 15 13:52:16.725: INFO: Pod downward-api-dbb4d798-f6a7-492e-a4fa-017710ca6e27 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:52:16.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8932" for this suite. Jul 15 13:52:22.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:52:22.933: INFO: namespace downward-api-8932 deletion completed in 6.204948699s • [SLOW TEST:10.350 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:52:22.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Jul 15 13:52:22.982: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:52:23.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2792" for this suite. 
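The proxy spec above passes --port 0 (logged as -p 0), which asks kubectl proxy to bind an ephemeral port instead of the 8001 default; the test then curls /api/ through whatever port was chosen. By hand (the port below is whatever the proxy prints; --disable-filter mirrors the test's flag and is not something to use outside a throwaway cluster):

# Prints something like: Starting to serve on 127.0.0.1:40717
kubectl proxy --port=0 --disable-filter &

# Substitute the port the proxy reported:
curl http://127.0.0.1:40717/api/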
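Earlier in this stretch, the Aggregator spec registered the 1.10 sample API server behind the aggregation layer and polled its Deployment until it was available. The registration step boils down to an APIService object pointing at an in-cluster Service; a hedged sketch (group, service name, and namespace are illustrative, and a real registration would pin a caBundle rather than skip TLS verification):

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com
  version: v1alpha1
  service:
    name: sample-api
    namespace: default
  insecureSkipTLSVerify: true   # illustrative only; production registrations supply caBundle
  groupPriorityMinimum: 2000
  versionPriority: 200
EOF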
Jul 15 13:52:29.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:52:29.168: INFO: namespace kubectl-2792 deletion completed in 6.10726922s • [SLOW TEST:6.234 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:52:29.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-ccc31074-bd37-4b1c-917f-a5cf9e55e0eb STEP: Creating a pod to test consume secrets Jul 15 13:52:29.538: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a61db377-0a8d-431a-bb6a-05dbbb4b2282" in namespace "projected-9631" to be "success or failure" Jul 15 13:52:29.597: INFO: Pod "pod-projected-secrets-a61db377-0a8d-431a-bb6a-05dbbb4b2282": Phase="Pending", Reason="", readiness=false. Elapsed: 58.292071ms Jul 15 13:52:31.601: INFO: Pod "pod-projected-secrets-a61db377-0a8d-431a-bb6a-05dbbb4b2282": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062560817s Jul 15 13:52:33.605: INFO: Pod "pod-projected-secrets-a61db377-0a8d-431a-bb6a-05dbbb4b2282": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066667265s STEP: Saw pod success Jul 15 13:52:33.605: INFO: Pod "pod-projected-secrets-a61db377-0a8d-431a-bb6a-05dbbb4b2282" satisfied condition "success or failure" Jul 15 13:52:33.608: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-a61db377-0a8d-431a-bb6a-05dbbb4b2282 container projected-secret-volume-test: STEP: delete the pod Jul 15 13:52:33.641: INFO: Waiting for pod pod-projected-secrets-a61db377-0a8d-431a-bb6a-05dbbb4b2282 to disappear Jul 15 13:52:33.657: INFO: Pod pod-projected-secrets-a61db377-0a8d-431a-bb6a-05dbbb4b2282 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:52:33.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9631" for this suite. 
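The projected-secret spec above mounts a secret through a projected volume and remaps the key to a custom path (the "with mappings" part). A minimal sketch (secret name, key, and path are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    # The key data-1 is readable only under its remapped path:
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: new-path-data-1
EOF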
Jul 15 13:52:39.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:52:39.740: INFO: namespace projected-9631 deletion completed in 6.080473567s • [SLOW TEST:10.572 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:52:39.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-ca36699b-667a-4a96-be2b-d9eb42aed6cf STEP: Creating a pod to test consume secrets Jul 15 13:52:39.827: INFO: Waiting up to 5m0s for pod "pod-secrets-6a8baa2e-790a-47b0-8d62-8a58412a1d4c" in namespace "secrets-5207" to be "success or failure" Jul 15 13:52:39.831: INFO: Pod "pod-secrets-6a8baa2e-790a-47b0-8d62-8a58412a1d4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.414192ms Jul 15 13:52:41.835: INFO: Pod "pod-secrets-6a8baa2e-790a-47b0-8d62-8a58412a1d4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008054424s Jul 15 13:52:43.839: INFO: Pod "pod-secrets-6a8baa2e-790a-47b0-8d62-8a58412a1d4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012066929s STEP: Saw pod success Jul 15 13:52:43.839: INFO: Pod "pod-secrets-6a8baa2e-790a-47b0-8d62-8a58412a1d4c" satisfied condition "success or failure" Jul 15 13:52:43.841: INFO: Trying to get logs from node iruya-worker pod pod-secrets-6a8baa2e-790a-47b0-8d62-8a58412a1d4c container secret-volume-test: STEP: delete the pod Jul 15 13:52:43.874: INFO: Waiting for pod pod-secrets-6a8baa2e-790a-47b0-8d62-8a58412a1d4c to disappear Jul 15 13:52:43.887: INFO: Pod pod-secrets-6a8baa2e-790a-47b0-8d62-8a58412a1d4c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:52:43.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5207" for this suite. 
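The spec above mounts one secret at two different paths in the same pod; the point is that a single secret can back any number of volumes. Sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: multi-volume-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-multi-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: multi-volume-secret
  - name: secret-volume-2
    secret:
      secretName: multi-volume-secret
EOF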
Jul 15 13:52:49.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:52:50.063: INFO: namespace secrets-5207 deletion completed in 6.172350825s • [SLOW TEST:10.321 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:52:50.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Jul 15 13:52:50.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1958' Jul 15 13:52:52.881: INFO: stderr: "" Jul 15 13:52:52.881: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 15 13:52:52.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1958' Jul 15 13:52:53.005: INFO: stderr: "" Jul 15 13:52:53.005: INFO: stdout: "update-demo-nautilus-7wwc9 update-demo-nautilus-zn4x7 " Jul 15 13:52:53.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wwc9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1958' Jul 15 13:52:53.104: INFO: stderr: "" Jul 15 13:52:53.104: INFO: stdout: "" Jul 15 13:52:53.104: INFO: update-demo-nautilus-7wwc9 is created but not running Jul 15 13:52:58.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1958' Jul 15 13:52:58.195: INFO: stderr: "" Jul 15 13:52:58.195: INFO: stdout: "update-demo-nautilus-7wwc9 update-demo-nautilus-zn4x7 " Jul 15 13:52:58.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wwc9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1958' Jul 15 13:52:58.298: INFO: stderr: "" Jul 15 13:52:58.298: INFO: stdout: "true" Jul 15 13:52:58.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wwc9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1958' Jul 15 13:52:58.401: INFO: stderr: "" Jul 15 13:52:58.401: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 15 13:52:58.401: INFO: validating pod update-demo-nautilus-7wwc9 Jul 15 13:52:58.404: INFO: got data: { "image": "nautilus.jpg" } Jul 15 13:52:58.404: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 15 13:52:58.404: INFO: update-demo-nautilus-7wwc9 is verified up and running Jul 15 13:52:58.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zn4x7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1958' Jul 15 13:52:58.493: INFO: stderr: "" Jul 15 13:52:58.493: INFO: stdout: "true" Jul 15 13:52:58.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zn4x7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1958' Jul 15 13:52:58.584: INFO: stderr: "" Jul 15 13:52:58.584: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 15 13:52:58.584: INFO: validating pod update-demo-nautilus-zn4x7 Jul 15 13:52:58.587: INFO: got data: { "image": "nautilus.jpg" } Jul 15 13:52:58.587: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 15 13:52:58.587: INFO: update-demo-nautilus-zn4x7 is verified up and running STEP: rolling-update to new replication controller Jul 15 13:52:58.589: INFO: scanned /root for discovery docs: Jul 15 13:52:58.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1958' Jul 15 13:53:21.145: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jul 15 13:53:21.145: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 15 13:53:21.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1958' Jul 15 13:53:21.253: INFO: stderr: "" Jul 15 13:53:21.253: INFO: stdout: "update-demo-kitten-5x9mq update-demo-kitten-ql4vl " Jul 15 13:53:21.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5x9mq -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1958' Jul 15 13:53:21.341: INFO: stderr: "" Jul 15 13:53:21.341: INFO: stdout: "true" Jul 15 13:53:21.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5x9mq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1958' Jul 15 13:53:21.423: INFO: stderr: "" Jul 15 13:53:21.423: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jul 15 13:53:21.423: INFO: validating pod update-demo-kitten-5x9mq Jul 15 13:53:21.437: INFO: got data: { "image": "kitten.jpg" } Jul 15 13:53:21.437: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jul 15 13:53:21.437: INFO: update-demo-kitten-5x9mq is verified up and running Jul 15 13:53:21.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ql4vl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1958' Jul 15 13:53:21.528: INFO: stderr: "" Jul 15 13:53:21.528: INFO: stdout: "true" Jul 15 13:53:21.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ql4vl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1958' Jul 15 13:53:21.626: INFO: stderr: "" Jul 15 13:53:21.626: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jul 15 13:53:21.626: INFO: validating pod update-demo-kitten-ql4vl Jul 15 13:53:21.630: INFO: got data: { "image": "kitten.jpg" } Jul 15 13:53:21.630: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jul 15 13:53:21.630: INFO: update-demo-kitten-ql4vl is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:53:21.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1958" for this suite. 
Jul 15 13:53:43.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:53:43.722: INFO: namespace kubectl-1958 deletion completed in 22.088664933s • [SLOW TEST:53.659 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:53:43.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-c5409575-dea4-4e24-b4a4-581fd9f66853 STEP: Creating a pod to test consume secrets Jul 15 13:53:43.848: INFO: Waiting up to 5m0s for pod "pod-secrets-f69185a9-d796-4283-88d3-ddc8cbb35bdc" in namespace "secrets-802" to be "success or failure" Jul 15 13:53:43.880: INFO: Pod "pod-secrets-f69185a9-d796-4283-88d3-ddc8cbb35bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 31.217886ms Jul 15 13:53:45.883: INFO: Pod "pod-secrets-f69185a9-d796-4283-88d3-ddc8cbb35bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034684484s Jul 15 13:53:47.886: INFO: Pod "pod-secrets-f69185a9-d796-4283-88d3-ddc8cbb35bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037430361s Jul 15 13:53:49.898: INFO: Pod "pod-secrets-f69185a9-d796-4283-88d3-ddc8cbb35bdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04921922s STEP: Saw pod success Jul 15 13:53:49.898: INFO: Pod "pod-secrets-f69185a9-d796-4283-88d3-ddc8cbb35bdc" satisfied condition "success or failure" Jul 15 13:53:49.900: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-f69185a9-d796-4283-88d3-ddc8cbb35bdc container secret-volume-test: STEP: delete the pod Jul 15 13:53:49.994: INFO: Waiting for pod pod-secrets-f69185a9-d796-4283-88d3-ddc8cbb35bdc to disappear Jul 15 13:53:50.029: INFO: Pod pod-secrets-f69185a9-d796-4283-88d3-ddc8cbb35bdc no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:53:50.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-802" for this suite. 
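The spec above checks that a secret volume stays readable for a non-root user when defaultMode and fsGroup are set: defaultMode fixes the file mode, and fsGroup makes the volume group-owned by that supplemental group. Illustrative sketch (UID, GID, and mode chosen for the example):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: nonroot-secret-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000   # non-root
    fsGroup: 1001     # volume files are group-owned by GID 1001
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: nonroot-secret-demo
      defaultMode: 0440   # octal; group-readable so the fsGroup member can read it
EOF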
Jul 15 13:53:56.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:53:56.122: INFO: namespace secrets-802 deletion completed in 6.088685848s • [SLOW TEST:12.400 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:53:56.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7700.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7700.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 15 13:54:02.275: INFO: DNS probes using dns-7700/dns-test-0f60766a-4a36-462b-a151-fc7c9b6825af succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:54:02.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7700" for this suite. 
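The DNS spec above spins up wheezy/jessie prober pods that loop dig over UDP and TCP against kubernetes.default.svc.cluster.local and the pod's own A record, writing OK marker files the test then collects. A quick manual equivalent of the core lookup (busybox:1.28 is a common choice because its nslookup behaves predictably; flags are v1.15-era kubectl run):

kubectl run dns-check --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup kubernetes.default.svc.cluster.local

# A healthy cluster answers with the kubernetes service's ClusterIP.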
Jul 15 13:54:08.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:54:08.420: INFO: namespace dns-7700 deletion completed in 6.092775651s • [SLOW TEST:12.298 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:54:08.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-6bba7f20-4d54-485d-903b-7ab36a46e81e STEP: Creating the pod STEP: Updating configmap configmap-test-upd-6bba7f20-4d54-485d-903b-7ab36a46e81e STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:55:36.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3644" for this suite. 
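The ConfigMap spec above creates a pod with a configMap volume, updates the ConfigMap, and waits for the new value to appear in the mounted file; the roughly 90 seconds the spec spends waiting reflects the kubelet's periodic sync, so updates become visible eventually, not instantly. Sketch (names illustrative; newer kubectl spells the dry-run flag --dry-run=client):

kubectl create configmap cm-upd-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-upd-demo
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-upd-demo
EOF

# Update in place; the mounted file follows after the next kubelet sync:
kubectl create configmap cm-upd-demo --from-literal=data-1=value-2 \
  --dry-run -o yaml | kubectl replace -f -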
Jul 15 13:55:59.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:55:59.089: INFO: namespace configmap-3644 deletion completed in 22.147240115s • [SLOW TEST:110.668 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:55:59.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 15 13:56:03.211: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:56:03.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9798" for this suite. 
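The Container Runtime spec above verifies that when a succeeding container writes to its termination-message file, that content (here "OK") surfaces in the container status even with FallbackToLogsOnError set, since the fallback to logs only applies when the file is empty and the container failed. Sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF

# After the pod succeeds, the message sits in the terminated state:
kubectl get pod termination-message-demo -o template \
  --template='{{range .status.containerStatuses}}{{.state.terminated.message}}{{end}}'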
Jul 15 13:56:09.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:56:09.509: INFO: namespace container-runtime-9798 deletion completed in 6.163774886s • [SLOW TEST:10.419 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:56:09.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Jul 15 13:56:09.572: INFO: Waiting up to 5m0s for pod "client-containers-405bcb0f-5649-4b16-8153-7e8f0c590c9c" in namespace "containers-571" to be "success or failure" Jul 15 13:56:09.576: INFO: Pod "client-containers-405bcb0f-5649-4b16-8153-7e8f0c590c9c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.881285ms Jul 15 13:56:11.580: INFO: Pod "client-containers-405bcb0f-5649-4b16-8153-7e8f0c590c9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007356097s Jul 15 13:56:13.584: INFO: Pod "client-containers-405bcb0f-5649-4b16-8153-7e8f0c590c9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011486608s STEP: Saw pod success Jul 15 13:56:13.584: INFO: Pod "client-containers-405bcb0f-5649-4b16-8153-7e8f0c590c9c" satisfied condition "success or failure" Jul 15 13:56:13.587: INFO: Trying to get logs from node iruya-worker2 pod client-containers-405bcb0f-5649-4b16-8153-7e8f0c590c9c container test-container: STEP: delete the pod Jul 15 13:56:13.608: INFO: Waiting for pod client-containers-405bcb0f-5649-4b16-8153-7e8f0c590c9c to disappear Jul 15 13:56:13.735: INFO: Pod client-containers-405bcb0f-5649-4b16-8153-7e8f0c590c9c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:56:13.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-571" for this suite. 
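The Docker Containers spec above overrides the image's default arguments: in pod terms, spec.containers[].args replaces the Dockerfile CMD (while command would replace ENTRYPOINT). Sketch (busybox ships no ENTRYPOINT, so the args run directly as the process):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Replaces the image's default CMD ("sh"):
    args: ["echo", "args override the image CMD"]
EOF

kubectl logs client-containers-demo   # -> args override the image CMD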
Jul 15 13:56:19.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:56:19.843: INFO: namespace containers-571 deletion completed in 6.10290937s • [SLOW TEST:10.333 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:56:19.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 15 13:56:27.957: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 15 13:56:27.979: INFO: Pod pod-with-prestop-exec-hook still exists Jul 15 13:56:29.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 15 13:56:29.983: INFO: Pod pod-with-prestop-exec-hook still exists Jul 15 13:56:31.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 15 13:56:31.984: INFO: Pod pod-with-prestop-exec-hook still exists Jul 15 13:56:33.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 15 13:56:33.983: INFO: Pod pod-with-prestop-exec-hook still exists Jul 15 13:56:35.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 15 13:56:35.984: INFO: Pod pod-with-prestop-exec-hook still exists Jul 15 13:56:37.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 15 13:56:37.984: INFO: Pod pod-with-prestop-exec-hook still exists Jul 15 13:56:39.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 15 13:56:39.984: INFO: Pod pod-with-prestop-exec-hook still exists Jul 15 13:56:41.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 15 13:56:41.984: INFO: Pod pod-with-prestop-exec-hook still exists Jul 15 13:56:43.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 15 13:56:43.984: INFO: Pod pod-with-prestop-exec-hook still exists Jul 15 13:56:45.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 15 13:56:45.983: INFO: Pod pod-with-prestop-exec-hook still exists Jul 15 13:56:47.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 15 13:56:47.984: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:56:47.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4350" for this suite. Jul 15 13:57:10.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:57:10.087: INFO: namespace container-lifecycle-hook-4350 deletion completed in 22.090235147s • [SLOW TEST:50.244 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:57:10.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 15 13:57:10.160: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e2d5b6bf-4495-4680-917b-672a0dad9222" in namespace "projected-3177" to be "success or failure" Jul 15 13:57:10.170: INFO: Pod "downwardapi-volume-e2d5b6bf-4495-4680-917b-672a0dad9222": Phase="Pending", Reason="", readiness=false. Elapsed: 9.813129ms Jul 15 13:57:12.174: INFO: Pod "downwardapi-volume-e2d5b6bf-4495-4680-917b-672a0dad9222": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013287062s Jul 15 13:57:14.178: INFO: Pod "downwardapi-volume-e2d5b6bf-4495-4680-917b-672a0dad9222": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017696962s STEP: Saw pod success Jul 15 13:57:14.178: INFO: Pod "downwardapi-volume-e2d5b6bf-4495-4680-917b-672a0dad9222" satisfied condition "success or failure" Jul 15 13:57:14.181: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e2d5b6bf-4495-4680-917b-672a0dad9222 container client-container: STEP: delete the pod Jul 15 13:57:14.202: INFO: Waiting for pod downwardapi-volume-e2d5b6bf-4495-4680-917b-672a0dad9222 to disappear Jul 15 13:57:14.206: INFO: Pod downwardapi-volume-e2d5b6bf-4495-4680-917b-672a0dad9222 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:57:14.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3177" for this suite. 
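The Container Lifecycle Hook spec above (the long run of "still exists" lines) deletes a pod carrying a preStop exec hook and polls until it is gone: the hook runs before the container receives SIGTERM, inside the pod's termination grace period, which is why deletion is not instant. The real test has the hook call back into a separate handler pod to prove it ran; a simplified, self-contained sketch replaces that callback with a plain command:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container before SIGTERM is delivered:
          command: ["sh", "-c", "echo prestop ran; sleep 5"]
EOF

# Deleting the pod fires the hook first, then the normal shutdown proceeds:
kubectl delete pod pod-with-prestop-exec-hook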
Jul 15 13:57:20.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:57:20.319: INFO: namespace projected-3177 deletion completed in 6.108693565s • [SLOW TEST:10.230 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:57:20.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:57:28.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9996" for this suite. 
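The Kubelet spec above schedules a busybox command that always fails and asserts the container reports a terminated reason. By hand (the conformance test manages restarts slightly differently; restartPolicy: Never keeps the sketch simple):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/false"]
EOF

# The nonzero exit surfaces as a terminated state with reason "Error":
kubectl get pod bin-false-demo -o template \
  --template='{{range .status.containerStatuses}}{{.state.terminated.reason}}{{end}}'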
Jul 15 13:57:34.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:57:34.500: INFO: namespace kubelet-test-9996 deletion completed in 6.085413271s • [SLOW TEST:14.181 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:57:34.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jul 15 13:57:34.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-483' Jul 15 13:57:34.802: INFO: stderr: "" Jul 15 13:57:34.802: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 15 13:57:34.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-483' Jul 15 13:57:34.950: INFO: stderr: "" Jul 15 13:57:34.950: INFO: stdout: "update-demo-nautilus-25tmr update-demo-nautilus-ns6bt " Jul 15 13:57:34.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25tmr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-483' Jul 15 13:57:35.056: INFO: stderr: "" Jul 15 13:57:35.056: INFO: stdout: "" Jul 15 13:57:35.056: INFO: update-demo-nautilus-25tmr is created but not running Jul 15 13:57:40.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-483' Jul 15 13:57:40.148: INFO: stderr: "" Jul 15 13:57:40.148: INFO: stdout: "update-demo-nautilus-25tmr update-demo-nautilus-ns6bt " Jul 15 13:57:40.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25tmr -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-483' Jul 15 13:57:40.232: INFO: stderr: "" Jul 15 13:57:40.232: INFO: stdout: "true" Jul 15 13:57:40.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25tmr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-483' Jul 15 13:57:40.322: INFO: stderr: "" Jul 15 13:57:40.322: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 15 13:57:40.322: INFO: validating pod update-demo-nautilus-25tmr Jul 15 13:57:40.326: INFO: got data: { "image": "nautilus.jpg" } Jul 15 13:57:40.326: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 15 13:57:40.326: INFO: update-demo-nautilus-25tmr is verified up and running Jul 15 13:57:40.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ns6bt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-483' Jul 15 13:57:40.404: INFO: stderr: "" Jul 15 13:57:40.404: INFO: stdout: "true" Jul 15 13:57:40.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ns6bt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-483' Jul 15 13:57:40.497: INFO: stderr: "" Jul 15 13:57:40.497: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 15 13:57:40.497: INFO: validating pod update-demo-nautilus-ns6bt Jul 15 13:57:40.501: INFO: got data: { "image": "nautilus.jpg" } Jul 15 13:57:40.501: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 15 13:57:40.501: INFO: update-demo-nautilus-ns6bt is verified up and running STEP: using delete to clean up resources Jul 15 13:57:40.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-483' Jul 15 13:57:40.593: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 15 13:57:40.593: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 15 13:57:40.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-483' Jul 15 13:57:41.013: INFO: stderr: "No resources found.\n" Jul 15 13:57:41.013: INFO: stdout: "" Jul 15 13:57:41.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-483 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 15 13:57:41.106: INFO: stderr: "" Jul 15 13:57:41.106: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:57:41.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-483" for this suite. 
Jul 15 13:58:03.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:58:03.573: INFO: namespace kubectl-483 deletion completed in 22.319918179s • [SLOW TEST:29.073 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:58:03.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jul 15 13:58:08.286: INFO: Successfully updated pod "annotationupdate866e7016-ba49-4706-83d8-08f84a6a4122" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:58:12.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7114" for this suite. 
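The projected downwardAPI spec above updates a pod's annotations and waits for the change to show up in the mounted file ("Successfully updated pod", then a pause while the volume catches up). Sketch of such a pod plus the in-place update (names and annotation values are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: one
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF

# The mounted file reflects the new value after the kubelet's next sync:
kubectl annotate pod annotationupdate-demo build=two --overwrite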
Jul 15 13:58:34.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:58:34.435: INFO: namespace projected-7114 deletion completed in 22.104839616s • [SLOW TEST:30.862 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:58:34.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 15 13:58:34.491: INFO: Waiting up to 5m0s for pod "downwardapi-volume-935bec65-b2ae-47d9-9605-40781ccd90c9" in namespace "downward-api-1749" to be "success or failure" Jul 15 13:58:34.545: INFO: Pod "downwardapi-volume-935bec65-b2ae-47d9-9605-40781ccd90c9": Phase="Pending", Reason="", readiness=false. Elapsed: 53.860534ms Jul 15 13:58:36.549: INFO: Pod "downwardapi-volume-935bec65-b2ae-47d9-9605-40781ccd90c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058058206s Jul 15 13:58:38.554: INFO: Pod "downwardapi-volume-935bec65-b2ae-47d9-9605-40781ccd90c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063227315s STEP: Saw pod success Jul 15 13:58:38.554: INFO: Pod "downwardapi-volume-935bec65-b2ae-47d9-9605-40781ccd90c9" satisfied condition "success or failure" Jul 15 13:58:38.557: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-935bec65-b2ae-47d9-9605-40781ccd90c9 container client-container: STEP: delete the pod Jul 15 13:58:38.592: INFO: Waiting for pod downwardapi-volume-935bec65-b2ae-47d9-9605-40781ccd90c9 to disappear Jul 15 13:58:38.595: INFO: Pod downwardapi-volume-935bec65-b2ae-47d9-9605-40781ccd90c9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:58:38.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1749" for this suite. 
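The downward-API volume test above exposes the container's own memory request as a file via resourceFieldRef. A minimal sketch of that manifest shape (names illustrative, not the exact e2e spec):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF
# with the default divisor the file holds the request in bytes (33554432 for 32Mi)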
Jul 15 13:58:44.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:58:44.690: INFO: namespace downward-api-1749 deletion completed in 6.091687553s • [SLOW TEST:10.255 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:58:44.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jul 15 13:58:44.748: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9042,SelfLink:/api/v1/namespaces/watch-9042/configmaps/e2e-watch-test-watch-closed,UID:9659c740-5a10-425c-9aae-ec3bbfd42849,ResourceVersion:1032758,Generation:0,CreationTimestamp:2020-07-15 13:58:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 15 13:58:44.748: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9042,SelfLink:/api/v1/namespaces/watch-9042/configmaps/e2e-watch-test-watch-closed,UID:9659c740-5a10-425c-9aae-ec3bbfd42849,ResourceVersion:1032759,Generation:0,CreationTimestamp:2020-07-15 13:58:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jul 15 13:58:44.776: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9042,SelfLink:/api/v1/namespaces/watch-9042/configmaps/e2e-watch-test-watch-closed,UID:9659c740-5a10-425c-9aae-ec3bbfd42849,ResourceVersion:1032760,Generation:0,CreationTimestamp:2020-07-15 13:58:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 15 13:58:44.776: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9042,SelfLink:/api/v1/namespaces/watch-9042/configmaps/e2e-watch-test-watch-closed,UID:9659c740-5a10-425c-9aae-ec3bbfd42849,ResourceVersion:1032761,Generation:0,CreationTimestamp:2020-07-15 13:58:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:58:44.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9042" for this suite. Jul 15 13:58:50.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:58:50.888: INFO: namespace watch-9042 deletion completed in 6.085302876s • [SLOW TEST:6.198 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:58:50.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jul 15 13:58:50.978: INFO: namespace kubectl-547 Jul 15 13:58:50.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-547' Jul 15 13:58:51.306: INFO: stderr: "" Jul 15 13:58:51.306: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Jul 15 13:58:52.310: INFO: Selector matched 1 pods for map[app:redis] Jul 15 13:58:52.310: INFO: Found 0 / 1 Jul 15 13:58:53.311: INFO: Selector matched 1 pods for map[app:redis] Jul 15 13:58:53.311: INFO: Found 0 / 1 Jul 15 13:58:54.335: INFO: Selector matched 1 pods for map[app:redis] Jul 15 13:58:54.335: INFO: Found 1 / 1 Jul 15 13:58:54.335: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 15 13:58:54.339: INFO: Selector matched 1 pods for map[app:redis] Jul 15 13:58:54.339: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 15 13:58:54.339: INFO: wait on redis-master startup in kubectl-547 Jul 15 13:58:54.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d462n redis-master --namespace=kubectl-547' Jul 15 13:58:54.469: INFO: stderr: "" Jul 15 13:58:54.469: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 Jul 13:58:54.039 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Jul 13:58:54.039 # Server started, Redis version 3.2.12\n1:M 15 Jul 13:58:54.039 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Jul 13:58:54.039 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jul 15 13:58:54.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-547' Jul 15 13:58:54.594: INFO: stderr: "" Jul 15 13:58:54.594: INFO: stdout: "service/rm2 exposed\n" Jul 15 13:58:54.603: INFO: Service rm2 in namespace kubectl-547 found. STEP: exposing service Jul 15 13:58:56.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-547' Jul 15 13:58:56.760: INFO: stderr: "" Jul 15 13:58:56.760: INFO: stdout: "service/rm3 exposed\n" Jul 15 13:58:56.765: INFO: Service rm3 in namespace kubectl-547 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:58:58.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-547" for this suite. 
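Both services above come from kubectl expose: the first copies its selector from the replication controller, the second from the existing service, so rm2 and rm3 end up routing different cluster ports to the same redis container port. The commands as run here, plus a sanity check not part of this run:

$ kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-547
$ kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-547
# both services should list the same redis pod IP behind their respective ports
$ kubectl get endpoints rm2 rm3 --namespace=kubectl-547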
Jul 15 13:59:22.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:59:22.859: INFO: namespace kubectl-547 deletion completed in 24.083125249s • [SLOW TEST:31.970 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:59:22.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 15 13:59:26.022: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 13:59:26.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1040" for this suite. 
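The termination-message check above exercises terminationMessagePolicy: FallbackToLogsOnError, which only falls back to container logs when the container fails; on a clean exit that writes no message file, the reported message stays empty (the "Expected: &{} to match" line). An illustrative pod, not the exact e2e spec:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: term-demo
    image: busybox:1.29
    command: ["true"]        # exit 0 without writing /dev/termination-log
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# inspect the reported termination message once the pod has succeeded:
$ kubectl get pod termination-message-demo \
    -o go-template='{{range .status.containerStatuses}}{{.state.terminated.message}}{{end}}'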
Jul 15 13:59:32.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 13:59:32.295: INFO: namespace container-runtime-1040 deletion completed in 6.107633949s • [SLOW TEST:9.436 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 13:59:32.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-7fac0723-7bbf-4a3c-81bd-dab5138e56e5 in namespace container-probe-288 Jul 15 13:59:36.413: INFO: Started pod liveness-7fac0723-7bbf-4a3c-81bd-dab5138e56e5 in namespace container-probe-288 STEP: checking the pod's current state and verifying that restartCount is present Jul 15 13:59:36.416: INFO: Initial restart count of pod liveness-7fac0723-7bbf-4a3c-81bd-dab5138e56e5 is 0 Jul 15 13:59:54.507: INFO: Restart count of pod container-probe-288/liveness-7fac0723-7bbf-4a3c-81bd-dab5138e56e5 is now 1 (18.090854605s elapsed) Jul 15 14:00:14.551: INFO: Restart count of pod container-probe-288/liveness-7fac0723-7bbf-4a3c-81bd-dab5138e56e5 is now 2 (38.135230192s elapsed) Jul 15 14:00:34.639: INFO: Restart count of pod container-probe-288/liveness-7fac0723-7bbf-4a3c-81bd-dab5138e56e5 is now 3 (58.223246938s elapsed) Jul 15 14:00:54.696: INFO: Restart count of pod container-probe-288/liveness-7fac0723-7bbf-4a3c-81bd-dab5138e56e5 is now 4 (1m18.280371312s elapsed) Jul 15 14:01:56.977: INFO: Restart count of pod container-probe-288/liveness-7fac0723-7bbf-4a3c-81bd-dab5138e56e5 is now 5 (2m20.561404175s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:01:56.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-288" for this suite. 
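A probe that keeps failing drives the restart counts seen above: each failure kills the container and the kubelet restarts it with exponential back-off, visible in the growing gaps between restarts (roughly 20s, 20s, 20s, 20s, then 62s). A sketch of such a pod (illustrative, not the exact e2e spec):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: busybox:1.29
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # fails once /tmp/health is removed
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# re-run to watch the count only ever increase:
$ kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'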
Jul 15 14:02:03.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:02:03.113: INFO: namespace container-probe-288 deletion completed in 6.115908037s • [SLOW TEST:150.818 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:02:03.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-1a3cf8fa-445e-4bee-ab60-b0ccbde6660d STEP: Creating a pod to test consume secrets Jul 15 14:02:03.226: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-05c8ba95-d071-4c6a-84f2-d5f8ae7e9258" in namespace "projected-9209" to be "success or failure" Jul 15 14:02:03.230: INFO: Pod "pod-projected-secrets-05c8ba95-d071-4c6a-84f2-d5f8ae7e9258": Phase="Pending", Reason="", readiness=false. Elapsed: 3.556171ms Jul 15 14:02:05.241: INFO: Pod "pod-projected-secrets-05c8ba95-d071-4c6a-84f2-d5f8ae7e9258": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015043377s Jul 15 14:02:07.245: INFO: Pod "pod-projected-secrets-05c8ba95-d071-4c6a-84f2-d5f8ae7e9258": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018832373s STEP: Saw pod success Jul 15 14:02:07.245: INFO: Pod "pod-projected-secrets-05c8ba95-d071-4c6a-84f2-d5f8ae7e9258" satisfied condition "success or failure" Jul 15 14:02:07.248: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-05c8ba95-d071-4c6a-84f2-d5f8ae7e9258 container projected-secret-volume-test: STEP: delete the pod Jul 15 14:02:07.278: INFO: Waiting for pod pod-projected-secrets-05c8ba95-d071-4c6a-84f2-d5f8ae7e9258 to disappear Jul 15 14:02:07.284: INFO: Pod pod-projected-secrets-05c8ba95-d071-4c6a-84f2-d5f8ae7e9258 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:02:07.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9209" for this suite. 
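The projected-secret test mounts a secret through a projected volume and has the test container print the mounted file. A minimal sketch (names and values illustrative):

$ kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
EOF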
Jul 15 14:02:13.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:02:13.425: INFO: namespace projected-9209 deletion completed in 6.138302002s • [SLOW TEST:10.312 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:02:13.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-4d533d94-f9d9-4b90-ad9e-d38c1361dff7 STEP: Creating a pod to test consume configMaps Jul 15 14:02:13.496: INFO: Waiting up to 5m0s for pod "pod-configmaps-14c25576-fe3c-4341-9d4d-7a216b06a364" in namespace "configmap-6690" to be "success or failure" Jul 15 14:02:13.506: INFO: Pod "pod-configmaps-14c25576-fe3c-4341-9d4d-7a216b06a364": Phase="Pending", Reason="", readiness=false. Elapsed: 9.666172ms Jul 15 14:02:15.601: INFO: Pod "pod-configmaps-14c25576-fe3c-4341-9d4d-7a216b06a364": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104573661s Jul 15 14:02:17.604: INFO: Pod "pod-configmaps-14c25576-fe3c-4341-9d4d-7a216b06a364": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10809391s STEP: Saw pod success Jul 15 14:02:17.604: INFO: Pod "pod-configmaps-14c25576-fe3c-4341-9d4d-7a216b06a364" satisfied condition "success or failure" Jul 15 14:02:17.607: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-14c25576-fe3c-4341-9d4d-7a216b06a364 container configmap-volume-test: STEP: delete the pod Jul 15 14:02:17.640: INFO: Waiting for pod pod-configmaps-14c25576-fe3c-4341-9d4d-7a216b06a364 to disappear Jul 15 14:02:17.654: INFO: Pod pod-configmaps-14c25576-fe3c-4341-9d4d-7a216b06a364 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:02:17.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6690" for this suite. 
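The "with mappings" variant remaps ConfigMap keys to chosen file paths via items, rather than exposing every key under its own name. Sketch (names and values illustrative):

$ kubectl create configmap configmap-volume-demo --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-volume-demo
      items:
      - key: data-1
        path: path/to/data-1    # mapped path, instead of the default file name "data-1"
EOF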
Jul 15 14:02:23.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:02:23.791: INFO: namespace configmap-6690 deletion completed in 6.133344243s • [SLOW TEST:10.366 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:02:23.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jul 15 14:02:23.872: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 15 14:02:23.916: INFO: Waiting for terminating namespaces to be deleted... Jul 15 14:02:23.919: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Jul 15 14:02:23.924: INFO: live-test7-5dd99f9b45-jtpmp from default started at 2020-07-10 11:54:47 +0000 UTC (1 container statuses recorded) Jul 15 14:02:23.924: INFO: Container live-test7 ready: false, restart count 1408 Jul 15 14:02:23.924: INFO: kindnet-452tn from kube-system started at 2020-07-10 10:24:50 +0000 UTC (1 container statuses recorded) Jul 15 14:02:23.924: INFO: Container kindnet-cni ready: true, restart count 0 Jul 15 14:02:23.924: INFO: live-test4-74f5c7c95f-l2676 from default started at 2020-07-10 11:02:03 +0000 UTC (1 container statuses recorded) Jul 15 14:02:23.924: INFO: Container live-test4 ready: false, restart count 1424 Jul 15 14:02:23.924: INFO: kube-proxy-2pg5m from kube-system started at 2020-07-10 10:24:49 +0000 UTC (1 container statuses recorded) Jul 15 14:02:23.924: INFO: Container kube-proxy ready: true, restart count 0 Jul 15 14:02:23.924: INFO: dnsutils from default started at 2020-07-10 11:15:11 +0000 UTC (1 container statuses recorded) Jul 15 14:02:23.924: INFO: Container dnsutils ready: true, restart count 122 Jul 15 14:02:23.924: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Jul 15 14:02:23.931: INFO: kindnet-qpkmc from kube-system started at 2020-07-10 10:24:50 +0000 UTC (1 container statuses recorded) Jul 15 14:02:23.931: INFO: Container kindnet-cni ready: true, restart count 0 Jul 15 14:02:23.931: INFO: live-test3-6556bf7d77-2k9dg from default started at 2020-07-10 11:00:05 +0000 UTC (1 container statuses recorded) Jul 15 14:02:23.931: INFO: Container live-test3 ready: false, restart count 1421 Jul 15 14:02:23.931: INFO: live-test6-988dbb567-rqc7x from default started at 2020-07-10 11:22:41 +0000 UTC (1 container statuses recorded) Jul 15 14:02:23.931: INFO: Container live-test6 ready: false, restart count 1422 Jul 15 14:02:23.931: INFO: live-test1-677ffc8869-nvdk5 from default started at 2020-07-10 10:49:37 +0000 UTC (1 
container statuses recorded) Jul 15 14:02:23.931: INFO: Container live-test1 ready: false, restart count 1424 Jul 15 14:02:23.931: INFO: live-test5-b6fcb7757-w869x from default started at 2020-07-10 11:06:28 +0000 UTC (1 container statuses recorded) Jul 15 14:02:23.931: INFO: Container live-test5 ready: false, restart count 1421 Jul 15 14:02:23.931: INFO: kube-proxy-bf52l from kube-system started at 2020-07-10 10:24:49 +0000 UTC (1 container statuses recorded) Jul 15 14:02:23.931: INFO: Container kube-proxy ready: true, restart count 0 Jul 15 14:02:23.931: INFO: live-test2-54d9dcd87-bsdvc from default started at 2020-07-10 10:58:02 +0000 UTC (1 container statuses recorded) Jul 15 14:02:23.931: INFO: Container live-test2 ready: false, restart count 1425 Jul 15 14:02:23.931: INFO: live-test8-55669b464c-bfdv5 from default started at 2020-07-10 11:56:07 +0000 UTC (1 container statuses recorded) Jul 15 14:02:23.931: INFO: Container live-test8 ready: false, restart count 1411 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5bee97a8-9260-41ad-8cf8-e0c70cf1edd3 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-5bee97a8-9260-41ad-8cf8-e0c70cf1edd3 off the node iruya-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-5bee97a8-9260-41ad-8cf8-e0c70cf1edd3 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:02:32.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5557" for this suite. 
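The scheduling test above applies a random label to a free node, relaunches the pod with a matching nodeSelector, and removes the label afterwards. The equivalent by hand, using the label key/value from this run (pod spec fragment illustrative):

$ kubectl label node iruya-worker2 kubernetes.io/e2e-5bee97a8-9260-41ad-8cf8-e0c70cf1edd3=42
# pod spec fragment that constrains scheduling to that node:
#   spec:
#     nodeSelector:
#       kubernetes.io/e2e-5bee97a8-9260-41ad-8cf8-e0c70cf1edd3: "42"
$ kubectl label node iruya-worker2 kubernetes.io/e2e-5bee97a8-9260-41ad-8cf8-e0c70cf1edd3-   # trailing "-" removes the label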
Jul 15 14:02:50.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:02:50.194: INFO: namespace sched-pred-5557 deletion completed in 18.089915124s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:26.403 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:02:50.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-3472/configmap-test-41569d3b-2a80-45c4-a7ae-99d74a8d17cf STEP: Creating a pod to test consume configMaps Jul 15 14:02:50.252: INFO: Waiting up to 5m0s for pod "pod-configmaps-72fdc5a0-bafd-4236-8b13-918d296d0929" in namespace "configmap-3472" to be "success or failure" Jul 15 14:02:50.255: INFO: Pod "pod-configmaps-72fdc5a0-bafd-4236-8b13-918d296d0929": Phase="Pending", Reason="", readiness=false. Elapsed: 3.616447ms Jul 15 14:02:52.260: INFO: Pod "pod-configmaps-72fdc5a0-bafd-4236-8b13-918d296d0929": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00806646s Jul 15 14:02:54.264: INFO: Pod "pod-configmaps-72fdc5a0-bafd-4236-8b13-918d296d0929": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012352285s STEP: Saw pod success Jul 15 14:02:54.264: INFO: Pod "pod-configmaps-72fdc5a0-bafd-4236-8b13-918d296d0929" satisfied condition "success or failure" Jul 15 14:02:54.267: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-72fdc5a0-bafd-4236-8b13-918d296d0929 container env-test: STEP: delete the pod Jul 15 14:02:54.291: INFO: Waiting for pod pod-configmaps-72fdc5a0-bafd-4236-8b13-918d296d0929 to disappear Jul 15 14:02:54.385: INFO: Pod pod-configmaps-72fdc5a0-bafd-4236-8b13-918d296d0929 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:02:54.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3472" for this suite. 
Jul 15 14:03:00.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:03:00.481: INFO: namespace configmap-3472 deletion completed in 6.091910771s • [SLOW TEST:10.286 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:03:00.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jul 15 14:03:00.579: INFO: Waiting up to 5m0s for pod "downward-api-b59a8de0-f49c-4e90-aef2-35254556f74b" in namespace "downward-api-3308" to be "success or failure" Jul 15 14:03:00.589: INFO: Pod "downward-api-b59a8de0-f49c-4e90-aef2-35254556f74b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.240521ms Jul 15 14:03:02.593: INFO: Pod "downward-api-b59a8de0-f49c-4e90-aef2-35254556f74b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014201432s Jul 15 14:03:04.598: INFO: Pod "downward-api-b59a8de0-f49c-4e90-aef2-35254556f74b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01861558s STEP: Saw pod success Jul 15 14:03:04.598: INFO: Pod "downward-api-b59a8de0-f49c-4e90-aef2-35254556f74b" satisfied condition "success or failure" Jul 15 14:03:04.601: INFO: Trying to get logs from node iruya-worker2 pod downward-api-b59a8de0-f49c-4e90-aef2-35254556f74b container dapi-container: STEP: delete the pod Jul 15 14:03:04.741: INFO: Waiting for pod downward-api-b59a8de0-f49c-4e90-aef2-35254556f74b to disappear Jul 15 14:03:04.751: INFO: Pod downward-api-b59a8de0-f49c-4e90-aef2-35254556f74b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:03:04.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3308" for this suite. 
Jul 15 14:03:10.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:03:10.860: INFO: namespace downward-api-3308 deletion completed in 6.105314648s • [SLOW TEST:10.379 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:03:10.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Jul 15 14:03:10.941: INFO: Waiting up to 5m0s for pod "var-expansion-93b03caf-7b44-4072-91ae-af86116c045f" in namespace "var-expansion-8610" to be "success or failure" Jul 15 14:03:10.944: INFO: Pod "var-expansion-93b03caf-7b44-4072-91ae-af86116c045f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.665479ms Jul 15 14:03:12.948: INFO: Pod "var-expansion-93b03caf-7b44-4072-91ae-af86116c045f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006484386s Jul 15 14:03:14.952: INFO: Pod "var-expansion-93b03caf-7b44-4072-91ae-af86116c045f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01041835s STEP: Saw pod success Jul 15 14:03:14.952: INFO: Pod "var-expansion-93b03caf-7b44-4072-91ae-af86116c045f" satisfied condition "success or failure" Jul 15 14:03:14.955: INFO: Trying to get logs from node iruya-worker pod var-expansion-93b03caf-7b44-4072-91ae-af86116c045f container dapi-container: STEP: delete the pod Jul 15 14:03:15.016: INFO: Waiting for pod var-expansion-93b03caf-7b44-4072-91ae-af86116c045f to disappear Jul 15 14:03:15.022: INFO: Pod var-expansion-93b03caf-7b44-4072-91ae-af86116c045f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:03:15.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8610" for this suite. 
Jul 15 14:03:21.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:03:21.112: INFO: namespace var-expansion-8610 deletion completed in 6.085954613s • [SLOW TEST:10.250 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:03:21.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jul 15 14:03:21.842: INFO: Pod name wrapped-volume-race-33343138-3bd4-4d29-8967-00c2463421d7: Found 0 pods out of 5 Jul 15 14:03:26.851: INFO: Pod name wrapped-volume-race-33343138-3bd4-4d29-8967-00c2463421d7: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-33343138-3bd4-4d29-8967-00c2463421d7 in namespace emptydir-wrapper-9200, will wait for the garbage collector to delete the pods Jul 15 14:03:42.935: INFO: Deleting ReplicationController wrapped-volume-race-33343138-3bd4-4d29-8967-00c2463421d7 took: 9.126624ms Jul 15 14:03:43.235: INFO: Terminating ReplicationController wrapped-volume-race-33343138-3bd4-4d29-8967-00c2463421d7 pods took: 300.283123ms STEP: Creating RC which spawns configmap-volume pods Jul 15 14:04:27.371: INFO: Pod name wrapped-volume-race-9212117d-f82e-4bf7-9cc6-d25ffefe7404: Found 0 pods out of 5 Jul 15 14:04:32.380: INFO: Pod name wrapped-volume-race-9212117d-f82e-4bf7-9cc6-d25ffefe7404: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9212117d-f82e-4bf7-9cc6-d25ffefe7404 in namespace emptydir-wrapper-9200, will wait for the garbage collector to delete the pods Jul 15 14:04:48.468: INFO: Deleting ReplicationController wrapped-volume-race-9212117d-f82e-4bf7-9cc6-d25ffefe7404 took: 6.382975ms Jul 15 14:04:48.768: INFO: Terminating ReplicationController wrapped-volume-race-9212117d-f82e-4bf7-9cc6-d25ffefe7404 pods took: 300.323575ms STEP: Creating RC which spawns configmap-volume pods Jul 15 14:05:28.094: INFO: Pod name wrapped-volume-race-4070c52c-a6d3-4620-8534-b8d28db8758e: Found 0 pods out of 5 Jul 15 14:05:33.120: INFO: Pod name wrapped-volume-race-4070c52c-a6d3-4620-8534-b8d28db8758e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4070c52c-a6d3-4620-8534-b8d28db8758e in namespace emptydir-wrapper-9200, will wait for the garbage collector to delete the pods Jul 15 14:05:47.206: INFO: Deleting 
ReplicationController wrapped-volume-race-4070c52c-a6d3-4620-8534-b8d28db8758e took: 7.010147ms Jul 15 14:05:47.506: INFO: Terminating ReplicationController wrapped-volume-race-4070c52c-a6d3-4620-8534-b8d28db8758e pods took: 300.261337ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:06:28.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9200" for this suite. Jul 15 14:06:36.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:06:36.185: INFO: namespace emptydir-wrapper-9200 deletion completed in 8.097566852s • [SLOW TEST:195.073 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:06:36.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-9d6b932a-3837-4702-90ca-09bbb4f7a237 STEP: Creating a pod to test consume configMaps Jul 15 14:06:36.309: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6b6fa937-283c-48b2-bcb9-655970e0f40c" in namespace "projected-8966" to be "success or failure" Jul 15 14:06:36.350: INFO: Pod "pod-projected-configmaps-6b6fa937-283c-48b2-bcb9-655970e0f40c": Phase="Pending", Reason="", readiness=false. Elapsed: 41.469134ms Jul 15 14:06:38.354: INFO: Pod "pod-projected-configmaps-6b6fa937-283c-48b2-bcb9-655970e0f40c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045441333s Jul 15 14:06:40.359: INFO: Pod "pod-projected-configmaps-6b6fa937-283c-48b2-bcb9-655970e0f40c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049944721s STEP: Saw pod success Jul 15 14:06:40.359: INFO: Pod "pod-projected-configmaps-6b6fa937-283c-48b2-bcb9-655970e0f40c" satisfied condition "success or failure" Jul 15 14:06:40.362: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-6b6fa937-283c-48b2-bcb9-655970e0f40c container projected-configmap-volume-test: STEP: delete the pod Jul 15 14:06:40.398: INFO: Waiting for pod pod-projected-configmaps-6b6fa937-283c-48b2-bcb9-655970e0f40c to disappear Jul 15 14:06:40.405: INFO: Pod pod-projected-configmaps-6b6fa937-283c-48b2-bcb9-655970e0f40c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:06:40.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8966" for this suite. Jul 15 14:06:46.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:06:46.501: INFO: namespace projected-8966 deletion completed in 6.093375058s • [SLOW TEST:10.316 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:06:46.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jul 15 14:06:46.639: INFO: Pod name pod-release: Found 0 pods out of 1 Jul 15 14:06:51.643: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:06:52.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5558" for this suite. 
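A replication controller only owns pods matched by its selector, so changing the label on a running pod "releases" it: the pod keeps running unowned while the RC spins up a replacement. By hand (the pod name below is hypothetical, standing in for whatever the RC actually created; label key and RC name are assumptions in the spirit of this test):

$ kubectl label pod pod-release-abc12 name=released --overwrite
$ kubectl get pods -l name=pod-release                          # the relabeled pod no longer matches
$ kubectl get rc pod-release -o jsonpath='{.status.replicas}'   # recovers to spec once a replacement starts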
Jul 15 14:06:58.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:06:58.848: INFO: namespace replication-controller-5558 deletion completed in 6.183560317s • [SLOW TEST:12.347 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:06:58.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Jul 15 14:06:58.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-904 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jul 15 14:07:04.972: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0715 14:07:04.905343 2572 log.go:172] (0xc0009142c0) (0xc000ac8a00) Create stream\nI0715 14:07:04.905388 2572 log.go:172] (0xc0009142c0) (0xc000ac8a00) Stream added, broadcasting: 1\nI0715 14:07:04.907710 2572 log.go:172] (0xc0009142c0) Reply frame received for 1\nI0715 14:07:04.907770 2572 log.go:172] (0xc0009142c0) (0xc000808000) Create stream\nI0715 14:07:04.907792 2572 log.go:172] (0xc0009142c0) (0xc000808000) Stream added, broadcasting: 3\nI0715 14:07:04.908829 2572 log.go:172] (0xc0009142c0) Reply frame received for 3\nI0715 14:07:04.908876 2572 log.go:172] (0xc0009142c0) (0xc0008080a0) Create stream\nI0715 14:07:04.908888 2572 log.go:172] (0xc0009142c0) (0xc0008080a0) Stream added, broadcasting: 5\nI0715 14:07:04.909923 2572 log.go:172] (0xc0009142c0) Reply frame received for 5\nI0715 14:07:04.909956 2572 log.go:172] (0xc0009142c0) (0xc000ac8aa0) Create stream\nI0715 14:07:04.909969 2572 log.go:172] (0xc0009142c0) (0xc000ac8aa0) Stream added, broadcasting: 7\nI0715 14:07:04.910839 2572 log.go:172] (0xc0009142c0) Reply frame received for 7\nI0715 14:07:04.910994 2572 log.go:172] (0xc000808000) (3) Writing data frame\nI0715 14:07:04.911157 2572 log.go:172] (0xc000808000) (3) Writing data frame\nI0715 14:07:04.912082 2572 log.go:172] (0xc0009142c0) Data frame received for 5\nI0715 14:07:04.912108 2572 log.go:172] (0xc0008080a0) (5) Data frame handling\nI0715 14:07:04.912131 2572 log.go:172] (0xc0008080a0) (5) Data frame sent\nI0715 14:07:04.912963 2572 log.go:172] (0xc0009142c0) Data frame received for 5\nI0715 14:07:04.912984 2572 log.go:172] (0xc0008080a0) (5) Data frame handling\nI0715 14:07:04.912995 2572 log.go:172] (0xc0008080a0) (5) Data frame sent\nI0715 14:07:04.948717 2572 log.go:172] (0xc0009142c0) Data frame received for 5\nI0715 14:07:04.948850 2572 log.go:172] (0xc0009142c0) Data frame received for 7\nI0715 14:07:04.948887 2572 log.go:172] (0xc000ac8aa0) (7) Data frame handling\nI0715 14:07:04.948951 2572 log.go:172] (0xc0008080a0) (5) Data frame handling\nI0715 14:07:04.949544 2572 log.go:172] (0xc0009142c0) Data frame received for 1\nI0715 14:07:04.949591 2572 log.go:172] (0xc000ac8a00) (1) Data frame handling\nI0715 14:07:04.949621 2572 log.go:172] (0xc000ac8a00) (1) Data frame sent\nI0715 14:07:04.949662 2572 log.go:172] (0xc0009142c0) (0xc000808000) Stream removed, broadcasting: 3\nI0715 14:07:04.949720 2572 log.go:172] (0xc0009142c0) (0xc000ac8a00) Stream removed, broadcasting: 1\nI0715 14:07:04.949759 2572 log.go:172] (0xc0009142c0) Go away received\nI0715 14:07:04.949861 2572 log.go:172] (0xc0009142c0) (0xc000ac8a00) Stream removed, broadcasting: 1\nI0715 14:07:04.949884 2572 log.go:172] (0xc0009142c0) (0xc000808000) Stream removed, broadcasting: 3\nI0715 14:07:04.949896 2572 log.go:172] (0xc0009142c0) (0xc0008080a0) Stream removed, broadcasting: 5\nI0715 14:07:04.949912 2572 log.go:172] (0xc0009142c0) (0xc000ac8aa0) Stream removed, broadcasting: 7\n" Jul 15 14:07:04.972: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:07:06.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-904" for this suite. 
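The command under test, reusable as-is (the --generator=job/v1 form is deprecated, as the captured stderr warns):

$ kubectl --namespace=kubectl-904 run e2e-test-rm-busybox-job \
    --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 \
    --restart=OnFailure --attach=true --stdin -- sh -c 'cat && echo stdin closed'
# type some input and close stdin (Ctrl-D); once the attach session ends,
# --rm deletes the job, which is what the "verifying the job ... was deleted" step checks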
Jul 15 14:07:12.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:07:13.079: INFO: namespace kubectl-904 deletion completed in 6.09708258s • [SLOW TEST:14.231 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:07:13.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-938d91bd-0ca2-40db-9cac-736fb73eb0e7 STEP: Creating a pod to test consume configMaps Jul 15 14:07:13.186: INFO: Waiting up to 5m0s for pod "pod-configmaps-ae97dc43-f261-42a5-8fca-cb0105be57c9" in namespace "configmap-8158" to be "success or failure" Jul 15 14:07:13.204: INFO: Pod "pod-configmaps-ae97dc43-f261-42a5-8fca-cb0105be57c9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.095059ms Jul 15 14:07:15.208: INFO: Pod "pod-configmaps-ae97dc43-f261-42a5-8fca-cb0105be57c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021533094s Jul 15 14:07:17.211: INFO: Pod "pod-configmaps-ae97dc43-f261-42a5-8fca-cb0105be57c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024968856s STEP: Saw pod success Jul 15 14:07:17.211: INFO: Pod "pod-configmaps-ae97dc43-f261-42a5-8fca-cb0105be57c9" satisfied condition "success or failure" Jul 15 14:07:17.213: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-ae97dc43-f261-42a5-8fca-cb0105be57c9 container configmap-volume-test: STEP: delete the pod Jul 15 14:07:17.252: INFO: Waiting for pod pod-configmaps-ae97dc43-f261-42a5-8fca-cb0105be57c9 to disappear Jul 15 14:07:17.411: INFO: Pod pod-configmaps-ae97dc43-f261-42a5-8fca-cb0105be57c9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:07:17.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8158" for this suite. 
Jul 15 14:07:23.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:07:23.545: INFO: namespace configmap-8158 deletion completed in 6.130152791s • [SLOW TEST:10.465 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:07:23.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 15 14:07:31.692: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 15 14:07:31.697: INFO: Pod pod-with-prestop-http-hook still exists Jul 15 14:07:33.697: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 15 14:07:33.711: INFO: Pod pod-with-prestop-http-hook still exists Jul 15 14:07:35.697: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 15 14:07:35.701: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:07:35.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3559" for this suite. 
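The preStop hook here is an httpGet the kubelet fires when the pod is deleted; the e2e test points it at a separate handler pod and then checks that the handler saw the request. A sketch of the hooked pod (handler address and echo path are assumptions, not from this log):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: busybox:1.29
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop
          port: 8080
          host: 10.244.1.10     # assumed IP of the handler pod serving the hook endpoint
EOF
# kubectl delete pod pod-with-prestop-http-hook  -> kubelet issues the GET before termination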
Jul 15 14:07:57.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:07:57.828: INFO: namespace container-lifecycle-hook-3559 deletion completed in 22.114533456s • [SLOW TEST:34.283 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:07:57.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-410e4dfe-96e3-4812-83f3-7d57ee62f658 in namespace container-probe-8927 Jul 15 14:08:01.938: INFO: Started pod busybox-410e4dfe-96e3-4812-83f3-7d57ee62f658 in namespace container-probe-8927 STEP: checking the pod's current state and verifying that restartCount is present Jul 15 14:08:01.941: INFO: Initial restart count of pod busybox-410e4dfe-96e3-4812-83f3-7d57ee62f658 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:12:02.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8927" for this suite. 
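The "never restarted" exec-probe behaviour above is straightforward to reproduce: as long as the probed file exists for the life of the container, the kubelet keeps passing the probe and restartCount stays 0. A minimal sketch, with illustrative names and timings:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF

Removing /tmp/health inside the container would flip the probe to failing and drive a restart, the inverse case covered by the /healthz test earlier in this run.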
Jul 15 14:12:08.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:12:09.005: INFO: namespace container-probe-8927 deletion completed in 6.111666471s • [SLOW TEST:251.175 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:12:09.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jul 15 14:12:09.112: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5565,SelfLink:/api/v1/namespaces/watch-5565/configmaps/e2e-watch-test-configmap-a,UID:aa851f53-0395-4ebb-b181-182aebc1d48d,ResourceVersion:1035831,Generation:0,CreationTimestamp:2020-07-15 14:12:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 15 14:12:09.112: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5565,SelfLink:/api/v1/namespaces/watch-5565/configmaps/e2e-watch-test-configmap-a,UID:aa851f53-0395-4ebb-b181-182aebc1d48d,ResourceVersion:1035831,Generation:0,CreationTimestamp:2020-07-15 14:12:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jul 15 14:12:19.120: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5565,SelfLink:/api/v1/namespaces/watch-5565/configmaps/e2e-watch-test-configmap-a,UID:aa851f53-0395-4ebb-b181-182aebc1d48d,ResourceVersion:1035852,Generation:0,CreationTimestamp:2020-07-15 14:12:09 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jul 15 14:12:19.120: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5565,SelfLink:/api/v1/namespaces/watch-5565/configmaps/e2e-watch-test-configmap-a,UID:aa851f53-0395-4ebb-b181-182aebc1d48d,ResourceVersion:1035852,Generation:0,CreationTimestamp:2020-07-15 14:12:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jul 15 14:12:29.128: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5565,SelfLink:/api/v1/namespaces/watch-5565/configmaps/e2e-watch-test-configmap-a,UID:aa851f53-0395-4ebb-b181-182aebc1d48d,ResourceVersion:1035874,Generation:0,CreationTimestamp:2020-07-15 14:12:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 15 14:12:29.128: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5565,SelfLink:/api/v1/namespaces/watch-5565/configmaps/e2e-watch-test-configmap-a,UID:aa851f53-0395-4ebb-b181-182aebc1d48d,ResourceVersion:1035874,Generation:0,CreationTimestamp:2020-07-15 14:12:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jul 15 14:12:39.135: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5565,SelfLink:/api/v1/namespaces/watch-5565/configmaps/e2e-watch-test-configmap-a,UID:aa851f53-0395-4ebb-b181-182aebc1d48d,ResourceVersion:1035894,Generation:0,CreationTimestamp:2020-07-15 14:12:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 15 14:12:39.135: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5565,SelfLink:/api/v1/namespaces/watch-5565/configmaps/e2e-watch-test-configmap-a,UID:aa851f53-0395-4ebb-b181-182aebc1d48d,ResourceVersion:1035894,Generation:0,CreationTimestamp:2020-07-15 14:12:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jul 15 14:12:49.142: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5565,SelfLink:/api/v1/namespaces/watch-5565/configmaps/e2e-watch-test-configmap-b,UID:3a40217a-0bf6-4240-98df-7ada1ca98e87,ResourceVersion:1035921,Generation:0,CreationTimestamp:2020-07-15 14:12:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 15 14:12:49.142: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5565,SelfLink:/api/v1/namespaces/watch-5565/configmaps/e2e-watch-test-configmap-b,UID:3a40217a-0bf6-4240-98df-7ada1ca98e87,ResourceVersion:1035921,Generation:0,CreationTimestamp:2020-07-15 14:12:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jul 15 14:12:59.149: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5565,SelfLink:/api/v1/namespaces/watch-5565/configmaps/e2e-watch-test-configmap-b,UID:3a40217a-0bf6-4240-98df-7ada1ca98e87,ResourceVersion:1035941,Generation:0,CreationTimestamp:2020-07-15 14:12:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 15 14:12:59.149: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5565,SelfLink:/api/v1/namespaces/watch-5565/configmaps/e2e-watch-test-configmap-b,UID:3a40217a-0bf6-4240-98df-7ada1ca98e87,ResourceVersion:1035941,Generation:0,CreationTimestamp:2020-07-15 14:12:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:13:09.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5565" for this suite. Jul 15 14:13:15.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:13:15.286: INFO: namespace watch-5565 deletion completed in 6.131442662s • [SLOW TEST:66.280 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:13:15.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-8b26a205-d916-4e35-afd6-540f0de561e6 STEP: Creating a pod to test consume configMaps Jul 15 14:13:15.431: INFO: Waiting up to 5m0s for pod "pod-configmaps-d10afe31-3a86-4407-904d-26b41a4aecea" in namespace "configmap-5748" to be "success or failure" Jul 15 14:13:15.449: INFO: Pod "pod-configmaps-d10afe31-3a86-4407-904d-26b41a4aecea": Phase="Pending", Reason="", readiness=false. Elapsed: 17.749447ms Jul 15 14:13:17.453: INFO: Pod "pod-configmaps-d10afe31-3a86-4407-904d-26b41a4aecea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022215212s Jul 15 14:13:19.458: INFO: Pod "pod-configmaps-d10afe31-3a86-4407-904d-26b41a4aecea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026489778s STEP: Saw pod success Jul 15 14:13:19.458: INFO: Pod "pod-configmaps-d10afe31-3a86-4407-904d-26b41a4aecea" satisfied condition "success or failure" Jul 15 14:13:19.461: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-d10afe31-3a86-4407-904d-26b41a4aecea container configmap-volume-test: STEP: delete the pod Jul 15 14:13:19.493: INFO: Waiting for pod pod-configmaps-d10afe31-3a86-4407-904d-26b41a4aecea to disappear Jul 15 14:13:19.507: INFO: Pod pod-configmaps-d10afe31-3a86-4407-904d-26b41a4aecea no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:13:19.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5748" for this suite. 
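The Watchers behaviour a few sections above (ADDED/MODIFIED/DELETED notifications filtered by label) can be observed by hand with a label-selector watch; the names and labels below mirror the test's but are otherwise arbitrary:

kubectl create configmap e2e-watch-demo --from-literal=mutation=0
kubectl label configmap e2e-watch-demo watch-this-configmap=multiple-watchers-A
# in a second terminal: stream changes for configmaps carrying the label
kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch
# back in the first terminal: each of these shows up on the watch
kubectl patch configmap e2e-watch-demo -p '{"data":{"mutation":"1"}}'
kubectl delete configmap e2e-watch-demo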
Jul 15 14:13:25.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:13:25.600: INFO: namespace configmap-5748 deletion completed in 6.089240753s • [SLOW TEST:10.313 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:13:25.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Jul 15 14:13:30.220: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7933 pod-service-account-1b74245c-6457-49ce-b3d6-0aa392329d00 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jul 15 14:13:30.427: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7933 pod-service-account-1b74245c-6457-49ce-b3d6-0aa392329d00 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jul 15 14:13:30.618: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7933 pod-service-account-1b74245c-6457-49ce-b3d6-0aa392329d00 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:13:30.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7933" for this suite. 
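The three kubectl exec invocations in the ServiceAccounts test above read the projected service-account files that pods receive by default. The same check works against any running pod; the pod name here is invented:

kubectl run sa-demo --image=busybox --restart=Never -- sleep 3600
kubectl exec sa-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
kubectl exec sa-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl exec sa-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace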
Jul 15 14:13:36.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:13:36.891: INFO: namespace svcaccounts-7933 deletion completed in 6.090709172s • [SLOW TEST:11.290 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:13:36.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 15 14:13:36.998: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bcc1fdf2-1f79-47a0-b20c-4bde8c6b76a3" in namespace "downward-api-9068" to be "success or failure" Jul 15 14:13:37.011: INFO: Pod "downwardapi-volume-bcc1fdf2-1f79-47a0-b20c-4bde8c6b76a3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.697263ms Jul 15 14:13:39.029: INFO: Pod "downwardapi-volume-bcc1fdf2-1f79-47a0-b20c-4bde8c6b76a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030782471s Jul 15 14:13:41.034: INFO: Pod "downwardapi-volume-bcc1fdf2-1f79-47a0-b20c-4bde8c6b76a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035959957s Jul 15 14:13:43.037: INFO: Pod "downwardapi-volume-bcc1fdf2-1f79-47a0-b20c-4bde8c6b76a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039215963s STEP: Saw pod success Jul 15 14:13:43.037: INFO: Pod "downwardapi-volume-bcc1fdf2-1f79-47a0-b20c-4bde8c6b76a3" satisfied condition "success or failure" Jul 15 14:13:43.039: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-bcc1fdf2-1f79-47a0-b20c-4bde8c6b76a3 container client-container: STEP: delete the pod Jul 15 14:13:43.105: INFO: Waiting for pod downwardapi-volume-bcc1fdf2-1f79-47a0-b20c-4bde8c6b76a3 to disappear Jul 15 14:13:43.117: INFO: Pod downwardapi-volume-bcc1fdf2-1f79-47a0-b20c-4bde8c6b76a3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:13:43.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9068" for this suite. 
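Setting a per-item mode on a downward API file, as the test above does, looks roughly like this; the path, mode, and image are illustrative (and busybox's stat applet is assumed to support -L and -c):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "stat -L -c %a /etc/podinfo/podname; cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400
        fieldRef:
          fieldPath: metadata.name
EOF

The stat call should print 400, which is the kind of assertion the test makes against the pod's log output.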
Jul 15 14:13:49.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:13:49.205: INFO: namespace downward-api-9068 deletion completed in 6.084574907s • [SLOW TEST:12.314 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:13:49.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 15 14:13:49.333: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jul 15 14:13:54.338: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 15 14:13:54.338: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jul 15 14:13:54.365: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-8381,SelfLink:/apis/apps/v1/namespaces/deployment-8381/deployments/test-cleanup-deployment,UID:90a5df44-0f82-4058-948c-2584268a010f,ResourceVersion:1036159,Generation:1,CreationTimestamp:2020-07-15 14:13:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jul 15 14:13:54.370: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-8381,SelfLink:/apis/apps/v1/namespaces/deployment-8381/replicasets/test-cleanup-deployment-55bbcbc84c,UID:e5b9354e-7e1b-455d-8ef4-56ab25609686,ResourceVersion:1036161,Generation:1,CreationTimestamp:2020-07-15 14:13:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 90a5df44-0f82-4058-948c-2584268a010f 0xc0016a2317 0xc0016a2318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 15 14:13:54.370: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jul 15 14:13:54.370: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-8381,SelfLink:/apis/apps/v1/namespaces/deployment-8381/replicasets/test-cleanup-controller,UID:d38ae459-50e9-4775-aaa7-7bcd69567e13,ResourceVersion:1036160,Generation:1,CreationTimestamp:2020-07-15 14:13:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 90a5df44-0f82-4058-948c-2584268a010f 0xc0016a2187 0xc0016a2188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jul 15 14:13:54.391: INFO: Pod "test-cleanup-controller-clqhz" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-clqhz,GenerateName:test-cleanup-controller-,Namespace:deployment-8381,SelfLink:/api/v1/namespaces/deployment-8381/pods/test-cleanup-controller-clqhz,UID:b52ee94b-f1a9-4677-b104-e574b44fb1f8,ResourceVersion:1036152,Generation:0,CreationTimestamp:2020-07-15 14:13:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller d38ae459-50e9-4775-aaa7-7bcd69567e13 0xc0016a31e7 0xc0016a31e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5g98v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5g98v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5g98v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016a3260} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016a3280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:13:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:13:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:13:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:13:49 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.133,StartTime:2020-07-15 14:13:49 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-15 14:13:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c36906413096b697ec2703d8a22e0fbdb4efa3eeefd0069dd6ca8b17424db2fe}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 15 14:13:54.392: INFO: Pod "test-cleanup-deployment-55bbcbc84c-k5n49" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-k5n49,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-8381,SelfLink:/api/v1/namespaces/deployment-8381/pods/test-cleanup-deployment-55bbcbc84c-k5n49,UID:70d3e655-7888-46f3-bddb-64bc31e43cb4,ResourceVersion:1036162,Generation:0,CreationTimestamp:2020-07-15 14:13:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c e5b9354e-7e1b-455d-8ef4-56ab25609686 0xc0016a3367 0xc0016a3368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5g98v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5g98v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-5g98v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016a33d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016a33f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:13:54.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8381" for this suite. 
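The cleanup behaviour above hinges on the deployment's RevisionHistoryLimit (set to *0 in the dump), which tells the controller to delete superseded ReplicaSets immediately. A minimal sketch of the same configuration, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo
spec:
  replicas: 1
  revisionHistoryLimit: 0   # keep no old ReplicaSets around after a rollout
  selector:
    matchLabels:
      app: cleanup-demo
  template:
    metadata:
      labels:
        app: cleanup-demo
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF

After an image update rolls out, kubectl get rs should show only the new ReplicaSet, mirroring the "history to be cleaned up" wait in the test.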
Jul 15 14:14:00.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:14:00.657: INFO: namespace deployment-8381 deletion completed in 6.176201618s • [SLOW TEST:11.452 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:14:00.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:14:26.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6717" for this suite. Jul 15 14:14:32.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:14:32.995: INFO: namespace namespaces-6717 deletion completed in 6.114909583s STEP: Destroying namespace "nsdeletetest-930" for this suite. Jul 15 14:14:32.997: INFO: Namespace nsdeletetest-930 was already deleted STEP: Destroying namespace "nsdeletetest-8637" for this suite. 
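The Namespaces test above relies on cascading deletion: deleting a namespace tears down every object inside it. By hand, with invented names:

kubectl create namespace nsdelete-demo
kubectl run test-pod --namespace=nsdelete-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never
kubectl delete namespace nsdelete-demo
kubectl get pods --namespace=nsdelete-demo   # once deletion finishes, the namespace and its pods are gone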
Jul 15 14:14:39.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:14:39.133: INFO: namespace nsdeletetest-8637 deletion completed in 6.13607078s • [SLOW TEST:38.475 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:14:39.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 15 14:14:47.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 15 14:14:47.281: INFO: Pod pod-with-poststart-http-hook still exists Jul 15 14:14:49.281: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 15 14:14:49.285: INFO: Pod pod-with-poststart-http-hook still exists Jul 15 14:14:51.281: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 15 14:14:51.285: INFO: Pod pod-with-poststart-http-hook still exists Jul 15 14:14:53.281: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 15 14:14:53.284: INFO: Pod pod-with-poststart-http-hook still exists Jul 15 14:14:55.281: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 15 14:14:55.284: INFO: Pod pod-with-poststart-http-hook still exists Jul 15 14:14:57.281: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 15 14:14:57.284: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:14:57.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2431" for this suite. 
Jul 15 14:15:19.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:15:19.374: INFO: namespace container-lifecycle-hook-2431 deletion completed in 22.085559207s • [SLOW TEST:40.241 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:15:19.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 15 14:15:39.468: INFO: Container started at 2020-07-15 14:15:21 +0000 UTC, pod became ready at 2020-07-15 14:15:37 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 15 14:15:39.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2874" for this suite. 
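The readiness result above (container started 14:15:21, pod ready 14:15:37) is the effect of initialDelaySeconds on a readiness probe: the pod is not marked Ready before the delay elapses, and a readiness probe never restarts the container. A sketch with illustrative values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 5
EOF

kubectl get pod readiness-delay-demo -w shows READY 0/1 for at least the first 15 seconds even though the container is already running.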
Jul 15 14:16:01.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 15 14:16:01.586: INFO: namespace container-probe-2874 deletion completed in 22.113784878s • [SLOW TEST:42.211 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 15 14:16:01.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 15 14:16:01.643: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
alternatives.log
containers/

[identical directory listing returned for each of the 20 proxy requests; the remainder of this Proxy test's output, its teardown, and its summary are truncated in this capture]
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-6e7134c8-e541-4644-bd98-b08fbbf43a53
STEP: Creating a pod to test consume configMaps
Jul 15 14:16:07.997: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4c84670a-ce50-4b44-82f5-d0eb02827995" in namespace "projected-7847" to be "success or failure"
Jul 15 14:16:08.007: INFO: Pod "pod-projected-configmaps-4c84670a-ce50-4b44-82f5-d0eb02827995": Phase="Pending", Reason="", readiness=false. Elapsed: 10.724624ms
Jul 15 14:16:10.011: INFO: Pod "pod-projected-configmaps-4c84670a-ce50-4b44-82f5-d0eb02827995": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014711028s
Jul 15 14:16:12.019: INFO: Pod "pod-projected-configmaps-4c84670a-ce50-4b44-82f5-d0eb02827995": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021928885s
STEP: Saw pod success
Jul 15 14:16:12.019: INFO: Pod "pod-projected-configmaps-4c84670a-ce50-4b44-82f5-d0eb02827995" satisfied condition "success or failure"
Jul 15 14:16:12.021: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-4c84670a-ce50-4b44-82f5-d0eb02827995 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 15 14:16:12.049: INFO: Waiting for pod pod-projected-configmaps-4c84670a-ce50-4b44-82f5-d0eb02827995 to disappear
Jul 15 14:16:12.060: INFO: Pod pod-projected-configmaps-4c84670a-ce50-4b44-82f5-d0eb02827995 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:16:12.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7847" for this suite.
Jul 15 14:16:20.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:16:20.180: INFO: namespace projected-7847 deletion completed in 8.116070827s

• [SLOW TEST:12.371 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
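Consuming a ConfigMap through a projected volume, as in the test above, differs from a plain configMap volume only in that the source sits under projected.sources, where it can be combined with secrets and downward API data in a single mount. Illustrative sketch, assuming the demo-config ConfigMap from the earlier sketch exists:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: all-in-one
      mountPath: /etc/projected
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: demo-config
EOF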
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:16:20.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 15 14:16:20.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8927'
Jul 15 14:16:20.342: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 15 14:16:20.342: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jul 15 14:16:22.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8927'
Jul 15 14:16:22.558: INFO: stderr: ""
Jul 15 14:16:22.558: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:16:22.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8927" for this suite.
Jul 15 14:16:44.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:16:44.650: INFO: namespace kubectl-8927 deletion completed in 22.088153304s

• [SLOW TEST:24.469 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
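The stderr above flags kubectl run --generator=deployment/apps.v1 as deprecated. On current clusters the equivalent is either a bare pod via kubectl run or an explicit deployment via kubectl create; for the deployment case the test exercises:

kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl get pods -l app=e2e-test-nginx-deployment   # the controlled pod the test waits for
kubectl delete deployment e2e-test-nginx-deployment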
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:16:44.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 15 14:16:44.717: INFO: Waiting up to 5m0s for pod "downwardapi-volume-796ed20d-c8b4-408c-b76b-76a55272e8bb" in namespace "downward-api-8560" to be "success or failure"
Jul 15 14:16:44.723: INFO: Pod "downwardapi-volume-796ed20d-c8b4-408c-b76b-76a55272e8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.720595ms
Jul 15 14:16:46.726: INFO: Pod "downwardapi-volume-796ed20d-c8b4-408c-b76b-76a55272e8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008701739s
Jul 15 14:16:48.729: INFO: Pod "downwardapi-volume-796ed20d-c8b4-408c-b76b-76a55272e8bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012230943s
STEP: Saw pod success
Jul 15 14:16:48.729: INFO: Pod "downwardapi-volume-796ed20d-c8b4-408c-b76b-76a55272e8bb" satisfied condition "success or failure"
Jul 15 14:16:48.731: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-796ed20d-c8b4-408c-b76b-76a55272e8bb container client-container: 
STEP: delete the pod
Jul 15 14:16:48.751: INFO: Waiting for pod downwardapi-volume-796ed20d-c8b4-408c-b76b-76a55272e8bb to disappear
Jul 15 14:16:48.762: INFO: Pod downwardapi-volume-796ed20d-c8b4-408c-b76b-76a55272e8bb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:16:48.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8560" for this suite.
Jul 15 14:16:54.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:16:54.906: INFO: namespace downward-api-8560 deletion completed in 6.140070088s

• [SLOW TEST:10.255 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
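The default-cpu-limit behaviour above comes from a downward API resourceFieldRef: when the container declares no CPU limit, the projected limits.cpu falls back to the node's allocatable CPU, which is what the test asserts. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF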
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:16:54.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-d6956d4a-a40a-4bfb-82ec-e0b8989a8c5c
STEP: Creating a pod to test consume secrets
Jul 15 14:16:54.968: INFO: Waiting up to 5m0s for pod "pod-secrets-676296a6-e325-4e83-97ec-8e162fcf6f82" in namespace "secrets-5201" to be "success or failure"
Jul 15 14:16:54.971: INFO: Pod "pod-secrets-676296a6-e325-4e83-97ec-8e162fcf6f82": Phase="Pending", Reason="", readiness=false. Elapsed: 3.448894ms
Jul 15 14:16:56.976: INFO: Pod "pod-secrets-676296a6-e325-4e83-97ec-8e162fcf6f82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007975618s
Jul 15 14:16:58.981: INFO: Pod "pod-secrets-676296a6-e325-4e83-97ec-8e162fcf6f82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012633693s
STEP: Saw pod success
Jul 15 14:16:58.981: INFO: Pod "pod-secrets-676296a6-e325-4e83-97ec-8e162fcf6f82" satisfied condition "success or failure"
Jul 15 14:16:58.984: INFO: Trying to get logs from node iruya-worker pod pod-secrets-676296a6-e325-4e83-97ec-8e162fcf6f82 container secret-volume-test: 
STEP: delete the pod
Jul 15 14:16:59.003: INFO: Waiting for pod pod-secrets-676296a6-e325-4e83-97ec-8e162fcf6f82 to disappear
Jul 15 14:16:59.007: INFO: Pod pod-secrets-676296a6-e325-4e83-97ec-8e162fcf6f82 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:16:59.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5201" for this suite.
Jul 15 14:17:05.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:17:05.101: INFO: namespace secrets-5201 deletion completed in 6.088735959s

• [SLOW TEST:10.195 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
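The "with mappings" variant above mounts a secret volume whose items list remaps a secret key to a custom file name. A minimal sketch under assumed names (the key/path pair mirrors the usual e2e convention but is illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map             # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-map
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:                        # the "mapping": key data-1 surfaces as new-path-data-1
      - key: data-1
        path: new-path-data-1
EOF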
SS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:17:05.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-069c5aa8-6bd0-4f52-a7a3-ee9eb37cdf91
STEP: Creating a pod to test consume secrets
Jul 15 14:17:05.182: INFO: Waiting up to 5m0s for pod "pod-secrets-93ce4e27-d117-4812-8e96-05d8cd506998" in namespace "secrets-3704" to be "success or failure"
Jul 15 14:17:05.212: INFO: Pod "pod-secrets-93ce4e27-d117-4812-8e96-05d8cd506998": Phase="Pending", Reason="", readiness=false. Elapsed: 29.11781ms
Jul 15 14:17:07.215: INFO: Pod "pod-secrets-93ce4e27-d117-4812-8e96-05d8cd506998": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032696401s
Jul 15 14:17:09.219: INFO: Pod "pod-secrets-93ce4e27-d117-4812-8e96-05d8cd506998": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036563847s
STEP: Saw pod success
Jul 15 14:17:09.219: INFO: Pod "pod-secrets-93ce4e27-d117-4812-8e96-05d8cd506998" satisfied condition "success or failure"
Jul 15 14:17:09.222: INFO: Trying to get logs from node iruya-worker pod pod-secrets-93ce4e27-d117-4812-8e96-05d8cd506998 container secret-env-test: 
STEP: delete the pod
Jul 15 14:17:09.243: INFO: Waiting for pod pod-secrets-93ce4e27-d117-4812-8e96-05d8cd506998 to disappear
Jul 15 14:17:09.247: INFO: Pod pod-secrets-93ce4e27-d117-4812-8e96-05d8cd506998 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:17:09.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3704" for this suite.
Jul 15 14:17:15.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:17:15.339: INFO: namespace secrets-3704 deletion completed in 6.088618942s

• [SLOW TEST:10.238 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
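This [sig-api-machinery] variant consumes the secret through an environment variable instead of a volume. A minimal sketch, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-env-demo             # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF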
S
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:17:15.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jul 15 14:17:15.400: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2544" to be "success or failure"
Jul 15 14:17:15.410: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.968915ms
Jul 15 14:17:17.414: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014371494s
Jul 15 14:17:19.418: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018296094s
Jul 15 14:17:21.422: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022380509s
STEP: Saw pod success
Jul 15 14:17:21.422: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jul 15 14:17:21.425: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jul 15 14:17:21.473: INFO: Waiting for pod pod-host-path-test to disappear
Jul 15 14:17:21.476: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:17:21.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2544" for this suite.
Jul 15 14:17:27.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:17:27.570: INFO: namespace hostpath-2544 deletion completed in 6.090574642s

• [SLOW TEST:12.231 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
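The hostPath test checks the permissions the kubelet gives a hostPath mount. A sketch of an equivalent pod (host directory, image, and names are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # print the directory mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/host-path-demo     # hypothetical host directory
      type: DirectoryOrCreate
EOF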
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:17:27.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jul 15 14:17:31.735: INFO: Pod pod-hostip-606c7878-0bf3-4a83-8c8e-3e5197043212 has hostIP: 172.18.0.9
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:17:31.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8831" for this suite.
Jul 15 14:17:53.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:17:53.831: INFO: namespace pods-8831 deletion completed in 22.09201202s

• [SLOW TEST:26.260 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
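The host-IP test reads status.hostIP from the pod object once it is scheduled. The same field can be queried with jsonpath or injected into the container via the downward API; a sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip-demo             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo hostIP=$HOST_IP; sleep 3600"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
# once scheduled, the node address also shows up in status:
kubectl get pod pod-hostip-demo -o jsonpath='{.status.hostIP}'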
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:17:53.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-8784, will wait for the garbage collector to delete the pods
Jul 15 14:17:59.965: INFO: Deleting Job.batch foo took: 6.483676ms
Jul 15 14:18:00.266: INFO: Terminating Job.batch foo pods took: 300.277305ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:18:36.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8784" for this suite.
Jul 15 14:18:43.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:18:43.091: INFO: namespace job-8784 deletion completed in 6.107268678s

• [SLOW TEST:49.260 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
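The Job deletion test relies on ownerReferences: the Job's pods point back at the Job, so deleting the Job lets the garbage collector remove the pods, which is the wait visible in the log above. A sketch of an equivalent Job (spec values are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2                    # the test asserts active pods == parallelism
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        command: ["sleep", "3600"]
EOF
# deleting the Job cascades to its pods via their ownerReferences:
kubectl delete job foo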
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:18:43.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jul 15 14:18:43.727: INFO: created pod pod-service-account-defaultsa
Jul 15 14:18:43.727: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jul 15 14:18:43.734: INFO: created pod pod-service-account-mountsa
Jul 15 14:18:43.734: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jul 15 14:18:43.794: INFO: created pod pod-service-account-nomountsa
Jul 15 14:18:43.794: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jul 15 14:18:43.802: INFO: created pod pod-service-account-defaultsa-mountspec
Jul 15 14:18:43.802: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jul 15 14:18:43.834: INFO: created pod pod-service-account-mountsa-mountspec
Jul 15 14:18:43.834: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jul 15 14:18:43.869: INFO: created pod pod-service-account-nomountsa-mountspec
Jul 15 14:18:43.869: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jul 15 14:18:43.927: INFO: created pod pod-service-account-defaultsa-nomountspec
Jul 15 14:18:43.927: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jul 15 14:18:43.979: INFO: created pod pod-service-account-mountsa-nomountspec
Jul 15 14:18:43.979: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jul 15 14:18:44.012: INFO: created pod pod-service-account-nomountsa-nomountspec
Jul 15 14:18:44.012: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:18:44.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9071" for this suite.
Jul 15 14:19:16.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:19:16.291: INFO: namespace svcaccounts-9071 deletion completed in 32.2091507s

• [SLOW TEST:33.199 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
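The nine pods above cover the precedence matrix for token automount: when the pod spec sets automountServiceAccountToken it wins; otherwise the service account's value applies; if neither is set, the token is mounted. A sketch of the fully opted-out combination (names are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                  # hypothetical name
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomount
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false   # pod-level field; overrides the SA default
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF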
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:19:16.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jul 15 14:19:16.369: INFO: Waiting up to 5m0s for pod "client-containers-cc041a83-937f-44e3-926b-ee2f59b50a30" in namespace "containers-1463" to be "success or failure"
Jul 15 14:19:16.400: INFO: Pod "client-containers-cc041a83-937f-44e3-926b-ee2f59b50a30": Phase="Pending", Reason="", readiness=false. Elapsed: 30.521669ms
Jul 15 14:19:18.404: INFO: Pod "client-containers-cc041a83-937f-44e3-926b-ee2f59b50a30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034435633s
Jul 15 14:19:20.408: INFO: Pod "client-containers-cc041a83-937f-44e3-926b-ee2f59b50a30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038513904s
STEP: Saw pod success
Jul 15 14:19:20.408: INFO: Pod "client-containers-cc041a83-937f-44e3-926b-ee2f59b50a30" satisfied condition "success or failure"
Jul 15 14:19:20.410: INFO: Trying to get logs from node iruya-worker2 pod client-containers-cc041a83-937f-44e3-926b-ee2f59b50a30 container test-container: 
STEP: delete the pod
Jul 15 14:19:20.426: INFO: Waiting for pod client-containers-cc041a83-937f-44e3-926b-ee2f59b50a30 to disappear
Jul 15 14:19:20.469: INFO: Pod client-containers-cc041a83-937f-44e3-926b-ee2f59b50a30 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:19:20.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1463" for this suite.
Jul 15 14:19:26.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:19:26.557: INFO: namespace containers-1463 deletion completed in 6.084758036s

• [SLOW TEST:10.266 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
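Overriding "the image's default command (docker entrypoint)" maps onto the pod spec like this: command replaces the image ENTRYPOINT and args replaces the image CMD. A minimal sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]               # replaces the image's ENTRYPOINT
    args: ["entrypoint", "overridden"]   # replaces the image's CMD
EOF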
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:19:26.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 15 14:19:26.622: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/: 
alternatives.log
containers/
[the same two-entry log listing repeats for each of the remaining proxy requests; the tail of this test and the header of the next test, [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info, are truncated in the source log]
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jul 15 14:19:32.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jul 15 14:19:35.328: INFO: stderr: ""
Jul 15 14:19:35.328: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:34751\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:34751/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:19:35.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5286" for this suite.
Jul 15 14:19:41.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:19:41.426: INFO: namespace kubectl-5286 deletion completed in 6.09366814s

• [SLOW TEST:8.643 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
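The two preceding tests drive the apiserver's node proxy subresource (whose /var/log listing, alternatives.log and containers/, appears in the collapsed output above) and kubectl cluster-info. Roughly equivalent manual invocations, assuming a reachable cluster and reusing the node name from the log:

kubectl cluster-info                                        # the check validated above
kubectl get --raw /api/v1/nodes/iruya-worker/proxy/logs/    # node-log listing via the apiserver proxy subresource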
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:19:41.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 15 14:19:41.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6643'
Jul 15 14:19:41.596: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 15 14:19:41.596: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jul 15 14:19:41.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-6643'
Jul 15 14:19:41.882: INFO: stderr: ""
Jul 15 14:19:41.882: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:19:41.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6643" for this suite.
Jul 15 14:19:47.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:19:48.030: INFO: namespace kubectl-6643 deletion completed in 6.143308776s

• [SLOW TEST:6.603 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
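The stderr above notes that the job/v1 generator is deprecated. The invocation the test ran, plus the replacement the warning points to on newer kubectl:

# deprecated generator exercised by the test:
kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/nginx:1.14-alpine
# current equivalent:
kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
kubectl delete jobs e2e-test-nginx-job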
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:19:48.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3615.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3615.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3615.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3615.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 15 14:19:54.177: INFO: DNS probes using dns-test-432ac2cd-bfc2-415f-8012-ba7c3faeb52c succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3615.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3615.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3615.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3615.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 15 14:20:02.295: INFO: File wheezy_udp@dns-test-service-3.dns-3615.svc.cluster.local from pod  dns-3615/dns-test-079b1cc5-32d3-4a6c-a7cb-53b3857c7870 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 15 14:20:02.299: INFO: File jessie_udp@dns-test-service-3.dns-3615.svc.cluster.local from pod  dns-3615/dns-test-079b1cc5-32d3-4a6c-a7cb-53b3857c7870 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 15 14:20:02.299: INFO: Lookups using dns-3615/dns-test-079b1cc5-32d3-4a6c-a7cb-53b3857c7870 failed for: [wheezy_udp@dns-test-service-3.dns-3615.svc.cluster.local jessie_udp@dns-test-service-3.dns-3615.svc.cluster.local]

Jul 15 14:20:07.406: INFO: File wheezy_udp@dns-test-service-3.dns-3615.svc.cluster.local from pod  dns-3615/dns-test-079b1cc5-32d3-4a6c-a7cb-53b3857c7870 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 15 14:20:07.409: INFO: File jessie_udp@dns-test-service-3.dns-3615.svc.cluster.local from pod  dns-3615/dns-test-079b1cc5-32d3-4a6c-a7cb-53b3857c7870 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 15 14:20:07.409: INFO: Lookups using dns-3615/dns-test-079b1cc5-32d3-4a6c-a7cb-53b3857c7870 failed for: [wheezy_udp@dns-test-service-3.dns-3615.svc.cluster.local jessie_udp@dns-test-service-3.dns-3615.svc.cluster.local]

Jul 15 14:20:12.307: INFO: File wheezy_udp@dns-test-service-3.dns-3615.svc.cluster.local from pod  dns-3615/dns-test-079b1cc5-32d3-4a6c-a7cb-53b3857c7870 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 15 14:20:12.310: INFO: File jessie_udp@dns-test-service-3.dns-3615.svc.cluster.local from pod  dns-3615/dns-test-079b1cc5-32d3-4a6c-a7cb-53b3857c7870 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 15 14:20:12.310: INFO: Lookups using dns-3615/dns-test-079b1cc5-32d3-4a6c-a7cb-53b3857c7870 failed for: [wheezy_udp@dns-test-service-3.dns-3615.svc.cluster.local jessie_udp@dns-test-service-3.dns-3615.svc.cluster.local]

Jul 15 14:20:17.303: INFO: File wheezy_udp@dns-test-service-3.dns-3615.svc.cluster.local from pod  dns-3615/dns-test-079b1cc5-32d3-4a6c-a7cb-53b3857c7870 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 15 14:20:17.306: INFO: File jessie_udp@dns-test-service-3.dns-3615.svc.cluster.local from pod  dns-3615/dns-test-079b1cc5-32d3-4a6c-a7cb-53b3857c7870 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 15 14:20:17.306: INFO: Lookups using dns-3615/dns-test-079b1cc5-32d3-4a6c-a7cb-53b3857c7870 failed for: [wheezy_udp@dns-test-service-3.dns-3615.svc.cluster.local jessie_udp@dns-test-service-3.dns-3615.svc.cluster.local]

Jul 15 14:20:22.304: INFO: File wheezy_udp@dns-test-service-3.dns-3615.svc.cluster.local from pod  dns-3615/dns-test-079b1cc5-32d3-4a6c-a7cb-53b3857c7870 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 15 14:20:22.308: INFO: File jessie_udp@dns-test-service-3.dns-3615.svc.cluster.local from pod  dns-3615/dns-test-079b1cc5-32d3-4a6c-a7cb-53b3857c7870 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 15 14:20:22.308: INFO: Lookups using dns-3615/dns-test-079b1cc5-32d3-4a6c-a7cb-53b3857c7870 failed for: [wheezy_udp@dns-test-service-3.dns-3615.svc.cluster.local jessie_udp@dns-test-service-3.dns-3615.svc.cluster.local]

Jul 15 14:20:27.310: INFO: DNS probes using dns-test-079b1cc5-32d3-4a6c-a7cb-53b3857c7870 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3615.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3615.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3615.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3615.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 15 14:20:33.784: INFO: DNS probes using dns-test-b5fdd0a0-b7b9-4181-886b-8b000a668262 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:20:33.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3615" for this suite.
Jul 15 14:20:39.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:20:39.982: INFO: namespace dns-3615 deletion completed in 6.083951132s

• [SLOW TEST:51.952 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
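The three-phase sequence above (CNAME to foo.example.com, CNAME to bar.example.com after patching, A record after converting to ClusterIP) comes from an ExternalName service. A sketch of the initial state and the probe loop, with <ns> as a placeholder for the service's namespace:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com     # cluster DNS answers with a CNAME to this host
EOF
# probe loop equivalent to the one injected into the wheezy/jessie pods:
for i in `seq 1 30`; do dig +short dns-test-service-3.<ns>.svc.cluster.local CNAME; sleep 1; done

Patching spec.externalName changes the CNAME answer, and switching spec.type to ClusterIP makes the same name resolve to an A record instead, which is why the second probe pod sees stale foo.example.com answers until DNS converges.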
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:20:39.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0715 14:20:51.517464       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 15 14:20:51.517: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:20:51.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1586" for this suite.
Jul 15 14:21:01.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:21:01.619: INFO: namespace gc-1586 deletion completed in 10.09858377s

• [SLOW TEST:21.636 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
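The garbage-collector test gives half the pods a second owner, then deletes only one owner; because a valid owner remains, the GC must leave those pods alone. The mechanism is plain ownerReferences metadata, sketched here for illustration only (the UIDs are placeholders, not values from the log):

# fragment of a dependent pod's metadata with two owners:
metadata:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: <uid-of-deleted-rc>        # placeholder
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: <uid-of-surviving-rc>      # placeholder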
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:21:01.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8687
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8687
STEP: Creating statefulset with conflicting port in namespace statefulset-8687
STEP: Waiting until pod test-pod will start running in namespace statefulset-8687
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8687
Jul 15 14:21:07.851: INFO: Observed stateful pod in namespace: statefulset-8687, name: ss-0, uid: dbc8dc2f-ee79-4e03-85a4-714e38202c54, status phase: Pending. Waiting for statefulset controller to delete.
Jul 15 14:21:08.031: INFO: Observed stateful pod in namespace: statefulset-8687, name: ss-0, uid: dbc8dc2f-ee79-4e03-85a4-714e38202c54, status phase: Failed. Waiting for statefulset controller to delete.
Jul 15 14:21:08.092: INFO: Observed stateful pod in namespace: statefulset-8687, name: ss-0, uid: dbc8dc2f-ee79-4e03-85a4-714e38202c54, status phase: Failed. Waiting for statefulset controller to delete.
Jul 15 14:21:08.139: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8687
STEP: Removing pod with conflicting port in namespace statefulset-8687
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8687 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jul 15 14:21:14.251: INFO: Deleting all statefulset in ns statefulset-8687
Jul 15 14:21:14.254: INFO: Scaling statefulset ss to 0
Jul 15 14:21:34.272: INFO: Waiting for statefulset status.replicas updated to 0
Jul 15 14:21:34.275: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:21:34.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8687" for this suite.
Jul 15 14:21:40.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:21:40.414: INFO: namespace statefulset-8687 deletion completed in 6.116275351s

• [SLOW TEST:38.795 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
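The eviction above is forced by a hostPort conflict: a pre-created pod holds a port on the node, so ss-0 goes Pending, fails, and is deleted and recreated by the StatefulSet controller until the port frees. A sketch of the conflicting StatefulSet (labels and port are assumptions; the node name is from the log):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo                  # hypothetical labels
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      nodeName: iruya-worker        # pin to the node running the conflicting pod
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80
          hostPort: 21017           # hypothetical port already held by test-pod
EOF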
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:21:40.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 15 14:21:40.495: INFO: Waiting up to 5m0s for pod "downwardapi-volume-477a0926-876e-4e75-b34e-50a6d18e8444" in namespace "projected-9360" to be "success or failure"
Jul 15 14:21:40.498: INFO: Pod "downwardapi-volume-477a0926-876e-4e75-b34e-50a6d18e8444": Phase="Pending", Reason="", readiness=false. Elapsed: 3.159586ms
Jul 15 14:21:42.502: INFO: Pod "downwardapi-volume-477a0926-876e-4e75-b34e-50a6d18e8444": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006951914s
Jul 15 14:21:44.505: INFO: Pod "downwardapi-volume-477a0926-876e-4e75-b34e-50a6d18e8444": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010693869s
STEP: Saw pod success
Jul 15 14:21:44.506: INFO: Pod "downwardapi-volume-477a0926-876e-4e75-b34e-50a6d18e8444" satisfied condition "success or failure"
Jul 15 14:21:44.509: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-477a0926-876e-4e75-b34e-50a6d18e8444 container client-container: 
STEP: delete the pod
Jul 15 14:21:44.557: INFO: Waiting for pod downwardapi-volume-477a0926-876e-4e75-b34e-50a6d18e8444 to disappear
Jul 15 14:21:44.576: INFO: Pod downwardapi-volume-477a0926-876e-4e75-b34e-50a6d18e8444 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:21:44.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9360" for this suite.
Jul 15 14:21:50.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:21:50.673: INFO: namespace projected-9360 deletion completed in 6.093588909s

• [SLOW TEST:10.259 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
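This is the projected-volume counterpart of the earlier downward API tests, here for limits.memory. A minimal sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-memory-default    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory set, so the file reports node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF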
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:21:50.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 15 14:21:50.729: INFO: Creating deployment "test-recreate-deployment"
Jul 15 14:21:50.732: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jul 15 14:21:50.790: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jul 15 14:21:52.795: INFO: Waiting deployment "test-recreate-deployment" to complete
Jul 15 14:21:52.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730419710, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730419710, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730419710, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730419710, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 15 14:21:54.802: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jul 15 14:21:54.809: INFO: Updating deployment test-recreate-deployment
Jul 15 14:21:54.809: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jul 15 14:21:55.312: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-5207,SelfLink:/apis/apps/v1/namespaces/deployment-5207/deployments/test-recreate-deployment,UID:5b37f0c7-3c71-4fa2-8879-dfb96584e13f,ResourceVersion:1038271,Generation:2,CreationTimestamp:2020-07-15 14:21:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-07-15 14:21:55 +0000 UTC 2020-07-15 14:21:55 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-07-15 14:21:55 +0000 UTC 2020-07-15 14:21:50 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jul 15 14:21:55.545: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-5207,SelfLink:/apis/apps/v1/namespaces/deployment-5207/replicasets/test-recreate-deployment-5c8c9cc69d,UID:871b464b-1706-4cb4-948e-0242f7141fc9,ResourceVersion:1038268,Generation:1,CreationTimestamp:2020-07-15 14:21:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 5b37f0c7-3c71-4fa2-8879-dfb96584e13f 0xc001c41e57 0xc001c41e58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 15 14:21:55.545: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jul 15 14:21:55.546: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-5207,SelfLink:/apis/apps/v1/namespaces/deployment-5207/replicasets/test-recreate-deployment-6df85df6b9,UID:d345f6b0-e6f1-4730-a7b5-3e71b49e06a1,ResourceVersion:1038258,Generation:2,CreationTimestamp:2020-07-15 14:21:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 5b37f0c7-3c71-4fa2-8879-dfb96584e13f 0xc001c41f47 0xc001c41f48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 15 14:21:55.574: INFO: Pod "test-recreate-deployment-5c8c9cc69d-bhzw7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-bhzw7,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-5207,SelfLink:/api/v1/namespaces/deployment-5207/pods/test-recreate-deployment-5c8c9cc69d-bhzw7,UID:8eff2e63-a320-4a4b-81b2-aa2df6e0b3e8,ResourceVersion:1038272,Generation:0,CreationTimestamp:2020-07-15 14:21:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 871b464b-1706-4cb4-948e-0242f7141fc9 0xc0016a2fd7 0xc0016a2fd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-v29k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-v29k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-v29k2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016a3050} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0016a3070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:21:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:21:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:21:54 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-07-15 14:21:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:21:55.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5207" for this suite.
Jul 15 14:22:01.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:22:01.809: INFO: namespace deployment-5207 deletion completed in 6.231253709s

• [SLOW TEST:11.135 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
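
The Recreate strategy exercised above scales the old ReplicaSet to zero before the new ReplicaSet creates any pods, which is why the dumped replacement pod is still Pending with ContainerCreating. A minimal client-go sketch of a Deployment with that strategy; the namespace, labels and image are taken from the dump above, and the context-free Create call matches the client-go releases contemporary with the v1.15 cluster in this run (newer client-go adds a context and options argument):

    package main

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        replicas := int32(1)
        labels := map[string]string{"name": "sample-pod-3"}
        d := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &replicas,
                // Recreate: delete every old pod first, then bring up the new ones.
                Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }}},
                },
            },
        }
        if _, err := cs.AppsV1().Deployments("deployment-5207").Create(d); err != nil {
            panic(err)
        }
    }
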
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:22:01.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jul 15 14:22:01.881: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix332457905/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:22:01.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8407" for this suite.
Jul 15 14:22:07.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:22:08.038: INFO: namespace kubectl-8407 deletion completed in 6.08760382s

• [SLOW TEST:6.228 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
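
With --unix-socket, kubectl proxy listens on a local socket instead of a TCP port, and the test simply fetches /api/ through it. A small Go client that performs the same retrieval, assuming a proxy is already serving on the illustrative socket path below:

    package main

    import (
        "context"
        "fmt"
        "io/ioutil"
        "net"
        "net/http"
    )

    func main() {
        const socket = "/tmp/kubectl-proxy-unix/test" // illustrative socket path
        client := &http.Client{
            Transport: &http.Transport{
                // Route every request to the proxy's unix socket.
                DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
                    return net.Dial("unix", socket)
                },
            },
        }
        // The host part is ignored by the dialer above; only the path matters.
        resp, err := client.Get("http://unix/api/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, err := ioutil.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body))
    }
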
------------------------------
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:22:08.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2829
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 15 14:22:08.091: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 15 14:22:30.261: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.247:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2829 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 15 14:22:30.261: INFO: >>> kubeConfig: /root/.kube/config
I0715 14:22:30.298204       6 log.go:172] (0xc00321c790) (0xc002e2d400) Create stream
I0715 14:22:30.298239       6 log.go:172] (0xc00321c790) (0xc002e2d400) Stream added, broadcasting: 1
I0715 14:22:30.300332       6 log.go:172] (0xc00321c790) Reply frame received for 1
I0715 14:22:30.300391       6 log.go:172] (0xc00321c790) (0xc001f8e0a0) Create stream
I0715 14:22:30.300408       6 log.go:172] (0xc00321c790) (0xc001f8e0a0) Stream added, broadcasting: 3
I0715 14:22:30.301571       6 log.go:172] (0xc00321c790) Reply frame received for 3
I0715 14:22:30.301639       6 log.go:172] (0xc00321c790) (0xc001dc2c80) Create stream
I0715 14:22:30.301668       6 log.go:172] (0xc00321c790) (0xc001dc2c80) Stream added, broadcasting: 5
I0715 14:22:30.302785       6 log.go:172] (0xc00321c790) Reply frame received for 5
I0715 14:22:30.367709       6 log.go:172] (0xc00321c790) Data frame received for 3
I0715 14:22:30.367745       6 log.go:172] (0xc001f8e0a0) (3) Data frame handling
I0715 14:22:30.367776       6 log.go:172] (0xc001f8e0a0) (3) Data frame sent
I0715 14:22:30.368132       6 log.go:172] (0xc00321c790) Data frame received for 5
I0715 14:22:30.368174       6 log.go:172] (0xc001dc2c80) (5) Data frame handling
I0715 14:22:30.368207       6 log.go:172] (0xc00321c790) Data frame received for 3
I0715 14:22:30.368233       6 log.go:172] (0xc001f8e0a0) (3) Data frame handling
I0715 14:22:30.370982       6 log.go:172] (0xc00321c790) Data frame received for 1
I0715 14:22:30.371003       6 log.go:172] (0xc002e2d400) (1) Data frame handling
I0715 14:22:30.371018       6 log.go:172] (0xc002e2d400) (1) Data frame sent
I0715 14:22:30.371048       6 log.go:172] (0xc00321c790) (0xc002e2d400) Stream removed, broadcasting: 1
I0715 14:22:30.371085       6 log.go:172] (0xc00321c790) Go away received
I0715 14:22:30.371174       6 log.go:172] (0xc00321c790) (0xc002e2d400) Stream removed, broadcasting: 1
I0715 14:22:30.371188       6 log.go:172] (0xc00321c790) (0xc001f8e0a0) Stream removed, broadcasting: 3
I0715 14:22:30.371200       6 log.go:172] (0xc00321c790) (0xc001dc2c80) Stream removed, broadcasting: 5
Jul 15 14:22:30.371: INFO: Found all expected endpoints: [netserver-0]
Jul 15 14:22:30.374: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.156:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2829 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 15 14:22:30.374: INFO: >>> kubeConfig: /root/.kube/config
I0715 14:22:30.406891       6 log.go:172] (0xc00321d760) (0xc002e2d7c0) Create stream
I0715 14:22:30.406917       6 log.go:172] (0xc00321d760) (0xc002e2d7c0) Stream added, broadcasting: 1
I0715 14:22:30.408849       6 log.go:172] (0xc00321d760) Reply frame received for 1
I0715 14:22:30.408898       6 log.go:172] (0xc00321d760) (0xc002e2d860) Create stream
I0715 14:22:30.408920       6 log.go:172] (0xc00321d760) (0xc002e2d860) Stream added, broadcasting: 3
I0715 14:22:30.409892       6 log.go:172] (0xc00321d760) Reply frame received for 3
I0715 14:22:30.409925       6 log.go:172] (0xc00321d760) (0xc001f8e280) Create stream
I0715 14:22:30.409937       6 log.go:172] (0xc00321d760) (0xc001f8e280) Stream added, broadcasting: 5
I0715 14:22:30.411072       6 log.go:172] (0xc00321d760) Reply frame received for 5
I0715 14:22:30.479023       6 log.go:172] (0xc00321d760) Data frame received for 5
I0715 14:22:30.479051       6 log.go:172] (0xc001f8e280) (5) Data frame handling
I0715 14:22:30.479081       6 log.go:172] (0xc00321d760) Data frame received for 3
I0715 14:22:30.479125       6 log.go:172] (0xc002e2d860) (3) Data frame handling
I0715 14:22:30.479139       6 log.go:172] (0xc002e2d860) (3) Data frame sent
I0715 14:22:30.479150       6 log.go:172] (0xc00321d760) Data frame received for 3
I0715 14:22:30.479169       6 log.go:172] (0xc002e2d860) (3) Data frame handling
I0715 14:22:30.480513       6 log.go:172] (0xc00321d760) Data frame received for 1
I0715 14:22:30.480536       6 log.go:172] (0xc002e2d7c0) (1) Data frame handling
I0715 14:22:30.480544       6 log.go:172] (0xc002e2d7c0) (1) Data frame sent
I0715 14:22:30.480797       6 log.go:172] (0xc00321d760) (0xc002e2d7c0) Stream removed, broadcasting: 1
I0715 14:22:30.480880       6 log.go:172] (0xc00321d760) (0xc002e2d7c0) Stream removed, broadcasting: 1
I0715 14:22:30.480896       6 log.go:172] (0xc00321d760) (0xc002e2d860) Stream removed, broadcasting: 3
I0715 14:22:30.480908       6 log.go:172] (0xc00321d760) (0xc001f8e280) Stream removed, broadcasting: 5
Jul 15 14:22:30.480: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0715 14:22:30.480958       6 log.go:172] (0xc00321d760) Go away received
Jul 15 14:22:30.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2829" for this suite.
Jul 15 14:22:52.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:22:52.572: INFO: namespace pod-network-test-2829 deletion completed in 22.080001428s

• [SLOW TEST:44.534 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
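
Each granular check above execs curl from a hostNetwork helper pod against a netserver pod's /hostName endpoint and expects that pod's hostname back. The same probe in plain Go, using one of the pod IPs from this run as an illustrative target (it is only reachable from inside the cluster network):

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
        "time"
    )

    func main() {
        // Pod IP and port taken from the run above.
        const target = "http://10.244.1.247:8080/hostName"
        client := &http.Client{Timeout: 15 * time.Second} // mirrors curl --max-time 15
        resp, err := client.Get(target)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        name, err := ioutil.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        fmt.Printf("endpoint answered as %q\n", name) // expected: the netserver pod's hostname
    }
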
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:22:52.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jul 15 14:22:52.674: INFO: Waiting up to 5m0s for pod "client-containers-9aedd2ba-331d-458b-b036-e7a2c32913f9" in namespace "containers-1944" to be "success or failure"
Jul 15 14:22:52.692: INFO: Pod "client-containers-9aedd2ba-331d-458b-b036-e7a2c32913f9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.805883ms
Jul 15 14:22:54.707: INFO: Pod "client-containers-9aedd2ba-331d-458b-b036-e7a2c32913f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032830108s
Jul 15 14:22:56.710: INFO: Pod "client-containers-9aedd2ba-331d-458b-b036-e7a2c32913f9": Phase="Running", Reason="", readiness=true. Elapsed: 4.036488904s
Jul 15 14:22:58.715: INFO: Pod "client-containers-9aedd2ba-331d-458b-b036-e7a2c32913f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040778539s
STEP: Saw pod success
Jul 15 14:22:58.715: INFO: Pod "client-containers-9aedd2ba-331d-458b-b036-e7a2c32913f9" satisfied condition "success or failure"
Jul 15 14:22:58.718: INFO: Trying to get logs from node iruya-worker pod client-containers-9aedd2ba-331d-458b-b036-e7a2c32913f9 container test-container: 
STEP: delete the pod
Jul 15 14:22:58.741: INFO: Waiting for pod client-containers-9aedd2ba-331d-458b-b036-e7a2c32913f9 to disappear
Jul 15 14:22:58.745: INFO: Pod client-containers-9aedd2ba-331d-458b-b036-e7a2c32913f9 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:22:58.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1944" for this suite.
Jul 15 14:23:04.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:23:04.839: INFO: namespace containers-1944 deletion completed in 6.090964406s

• [SLOW TEST:12.267 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
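
The container above is created with empty Command and Args, so the kubelet falls back to the image's ENTRYPOINT and CMD. A sketch of a pod spec of that shape; the pod name and busybox image are illustrative, and the context-free Create call again matches the v1.15-era client-go:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "client-containers-defaults"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "docker.io/library/busybox:1.29", // illustrative image
                    // Command and Args intentionally omitted: the image's
                    // ENTRYPOINT/CMD decide what runs.
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("containers-1944").Create(pod); err != nil {
            panic(err)
        }
    }
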
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:23:04.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 15 14:23:04.936: INFO: Waiting up to 5m0s for pod "pod-8657b8ba-ab63-4998-8d72-0188ad30e7af" in namespace "emptydir-5507" to be "success or failure"
Jul 15 14:23:04.949: INFO: Pod "pod-8657b8ba-ab63-4998-8d72-0188ad30e7af": Phase="Pending", Reason="", readiness=false. Elapsed: 12.664807ms
Jul 15 14:23:06.953: INFO: Pod "pod-8657b8ba-ab63-4998-8d72-0188ad30e7af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01649136s
Jul 15 14:23:08.970: INFO: Pod "pod-8657b8ba-ab63-4998-8d72-0188ad30e7af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033449231s
STEP: Saw pod success
Jul 15 14:23:08.970: INFO: Pod "pod-8657b8ba-ab63-4998-8d72-0188ad30e7af" satisfied condition "success or failure"
Jul 15 14:23:08.973: INFO: Trying to get logs from node iruya-worker2 pod pod-8657b8ba-ab63-4998-8d72-0188ad30e7af container test-container: 
STEP: delete the pod
Jul 15 14:23:08.989: INFO: Waiting for pod pod-8657b8ba-ab63-4998-8d72-0188ad30e7af to disappear
Jul 15 14:23:09.001: INFO: Pod pod-8657b8ba-ab63-4998-8d72-0188ad30e7af no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:23:09.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5507" for this suite.
Jul 15 14:23:15.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:23:15.089: INFO: namespace emptydir-5507 deletion completed in 6.08557083s

• [SLOW TEST:10.251 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
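
The (non-root,0644,tmpfs) variant mounts an emptyDir backed by memory (tmpfs), runs as a non-root user, and checks that a file created with mode 0644 holds its content and permissions. A sketch of the pod pieces involved; the UID, image and mount path are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func buildPod() *corev1.Pod {
        nonRoot := int64(1000) // illustrative non-root UID
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "docker.io/library/busybox:1.29", // illustrative image
                    // umask 022 makes the shell redirection create the file 0644.
                    Command: []string{"sh", "-c",
                        "umask 022 && echo hello > /mnt/test/f && ls -l /mnt/test/f && cat /mnt/test/f"},
                    SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRoot},
                    VolumeMounts:    []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/test"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "scratch",
                    VolumeSource: corev1.VolumeSource{
                        // Medium "Memory" backs the emptyDir with tmpfs.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
            },
        }
    }

    func main() {
        fmt.Println(buildPod().Name)
    }
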
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:23:15.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jul 15 14:23:15.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6567'
Jul 15 14:23:15.469: INFO: stderr: ""
Jul 15 14:23:15.469: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jul 15 14:23:16.474: INFO: Selector matched 1 pods for map[app:redis]
Jul 15 14:23:16.474: INFO: Found 0 / 1
Jul 15 14:23:17.474: INFO: Selector matched 1 pods for map[app:redis]
Jul 15 14:23:17.474: INFO: Found 0 / 1
Jul 15 14:23:18.473: INFO: Selector matched 1 pods for map[app:redis]
Jul 15 14:23:18.473: INFO: Found 0 / 1
Jul 15 14:23:19.474: INFO: Selector matched 1 pods for map[app:redis]
Jul 15 14:23:19.474: INFO: Found 1 / 1
Jul 15 14:23:19.474: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul 15 14:23:19.477: INFO: Selector matched 1 pods for map[app:redis]
Jul 15 14:23:19.477: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jul 15 14:23:19.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-v2qzl redis-master --namespace=kubectl-6567'
Jul 15 14:23:19.580: INFO: stderr: ""
Jul 15 14:23:19.580: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 15 Jul 14:23:18.423 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Jul 14:23:18.423 # Server started, Redis version 3.2.12\n1:M 15 Jul 14:23:18.423 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Jul 14:23:18.423 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jul 15 14:23:19.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-v2qzl redis-master --namespace=kubectl-6567 --tail=1'
Jul 15 14:23:19.703: INFO: stderr: ""
Jul 15 14:23:19.703: INFO: stdout: "1:M 15 Jul 14:23:18.423 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jul 15 14:23:19.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-v2qzl redis-master --namespace=kubectl-6567 --limit-bytes=1'
Jul 15 14:23:19.811: INFO: stderr: ""
Jul 15 14:23:19.811: INFO: stdout: " "
STEP: exposing timestamps
Jul 15 14:23:19.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-v2qzl redis-master --namespace=kubectl-6567 --tail=1 --timestamps'
Jul 15 14:23:19.921: INFO: stderr: ""
Jul 15 14:23:19.921: INFO: stdout: "2020-07-15T14:23:18.42382219Z 1:M 15 Jul 14:23:18.423 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jul 15 14:23:22.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-v2qzl redis-master --namespace=kubectl-6567 --since=1s'
Jul 15 14:23:22.524: INFO: stderr: ""
Jul 15 14:23:22.524: INFO: stdout: ""
Jul 15 14:23:22.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-v2qzl redis-master --namespace=kubectl-6567 --since=24h'
Jul 15 14:23:22.628: INFO: stderr: ""
Jul 15 14:23:22.628: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 15 Jul 14:23:18.423 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Jul 14:23:18.423 # Server started, Redis version 3.2.12\n1:M 15 Jul 14:23:18.423 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Jul 14:23:18.423 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jul 15 14:23:22.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6567'
Jul 15 14:23:22.720: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 15 14:23:22.720: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jul 15 14:23:22.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-6567'
Jul 15 14:23:22.820: INFO: stderr: "No resources found.\n"
Jul 15 14:23:22.820: INFO: stdout: ""
Jul 15 14:23:22.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-6567 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 15 14:23:22.908: INFO: stderr: ""
Jul 15 14:23:22.908: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:23:22.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6567" for this suite.
Jul 15 14:23:44.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:23:45.025: INFO: namespace kubectl-6567 deletion completed in 22.114475683s

• [SLOW TEST:29.936 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
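
Every kubectl flag the test drives has a direct PodLogOptions counterpart in the API: --tail maps to TailLines, --limit-bytes to LimitBytes, --timestamps to Timestamps, and --since to SinceSeconds. A sketch that reads the same redis-master logs through client-go; the test exercises the flags one at a time, while this combines them only to show the mapping (pre-0.18 Stream() call style):

    package main

    import (
        "fmt"
        "io/ioutil"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        tail, limit, since := int64(1), int64(1), int64(1)
        opts := &corev1.PodLogOptions{
            Container:    "redis-master",
            TailLines:    &tail,  // kubectl logs --tail=1
            LimitBytes:   &limit, // kubectl logs --limit-bytes=1
            SinceSeconds: &since, // kubectl logs --since=1s
            Timestamps:   true,   // kubectl logs --timestamps
        }
        stream, err := cs.CoreV1().Pods("kubectl-6567").GetLogs("redis-master-v2qzl", opts).Stream()
        if err != nil {
            panic(err)
        }
        defer stream.Close()
        out, err := ioutil.ReadAll(stream)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out)
    }
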
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:23:45.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:23:45.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2587" for this suite.
Jul 15 14:23:51.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:23:51.338: INFO: namespace kubelet-test-2587 deletion completed in 6.104158608s

• [SLOW TEST:6.313 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
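
The pod in this test runs a command that exits nonzero on every restart, and the assertion is only that such a crash-looping pod can still be deleted immediately. A sketch of the create-then-force-delete sequence; the pod name, image and command are illustrative, and grace period 0 is what makes the delete immediate:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pods := cs.CoreV1().Pods("kubelet-test-2587")
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "bin-false",
                    Image:   "docker.io/library/busybox:1.29", // illustrative image
                    Command: []string{"/bin/false"},           // fails on every start
                }},
            },
        }
        if _, err := pods.Create(pod); err != nil {
            panic(err)
        }
        // Grace period 0: delete right away, even while the container crash-loops.
        if err := pods.Delete("bin-false", metav1.NewDeleteOptions(0)); err != nil {
            panic(err)
        }
    }
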
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:23:51.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:23:56.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5148" for this suite.
Jul 15 14:24:04.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:24:05.074: INFO: namespace watch-5148 deletion completed in 8.179271156s

• [SLOW TEST:13.735 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
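
The guarantee being checked is that watches opened at different resource versions all replay the event history in one canonical order. A sketch that lists a resource, then watches from the returned resourceVersion; any number of clients doing this concurrently must see the same event sequence (pre-0.18 List/Watch signatures, illustrative resource choice):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        cms := cs.CoreV1().ConfigMaps("watch-5148")
        list, err := cms.List(metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        // Start the watch at the list's resourceVersion: every client that does
        // this must observe the subsequent events in the same order.
        w, err := cms.Watch(metav1.ListOptions{ResourceVersion: list.ResourceVersion})
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Printf("%s %T\n", ev.Type, ev.Object)
        }
    }
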
------------------------------
SSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:24:05.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 15 14:24:05.140: INFO: Creating deployment "nginx-deployment"
Jul 15 14:24:05.144: INFO: Waiting for observed generation 1
Jul 15 14:24:07.204: INFO: Waiting for all required pods to come up
Jul 15 14:24:07.207: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul 15 14:24:19.215: INFO: Waiting for deployment "nginx-deployment" to complete
Jul 15 14:24:19.221: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jul 15 14:24:19.227: INFO: Updating deployment nginx-deployment
Jul 15 14:24:19.227: INFO: Waiting for observed generation 2
Jul 15 14:24:21.241: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul 15 14:24:21.244: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul 15 14:24:21.247: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul 15 14:24:21.254: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul 15 14:24:21.254: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul 15 14:24:21.257: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul 15 14:24:21.261: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jul 15 14:24:21.261: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jul 15 14:24:21.267: INFO: Updating deployment nginx-deployment
Jul 15 14:24:21.267: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jul 15 14:24:21.447: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul 15 14:24:21.486: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
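
The 20/13 split above is the proportional-scaling arithmetic: before the scale-up the rollout holds 8 old and 5 new replicas (13 in flight, i.e. 10 desired plus maxSurge 3), the new ceiling is 30 + 3 = 33, and each ReplicaSet is resized to round(current * 33 / 13), so 8 becomes 20.3 → 20 and 5 becomes 12.7 → 13. A small sketch of that rounding rule, simplified from the controller's logic (the clamping that reconciles rounding leftovers between ReplicaSets is omitted):

    package main

    import (
        "fmt"
        "math"
    )

    // proportionalSize resizes one ReplicaSet for a scaling event:
    // round(current * newTotalWithSurge / oldTotalWithSurge).
    func proportionalSize(current, oldMax, newMax int) int {
        return int(math.Round(float64(current) * float64(newMax) / float64(oldMax)))
    }

    func main() {
        // Mid-rollout state from the run above: old RS = 8, new RS = 5, i.e. 13
        // in flight for 10 desired + maxSurge 3; scaling 10 -> 30 allows 33 total.
        oldMax, newMax := 13, 33
        fmt.Println(proportionalSize(8, oldMax, newMax)) // 20
        fmt.Println(proportionalSize(5, oldMax, newMax)) // 13
    }
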
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jul 15 14:24:21.828: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-3681,SelfLink:/apis/apps/v1/namespaces/deployment-3681/deployments/nginx-deployment,UID:0d4b424b-8082-4c30-a5e8-a0389afb8446,ResourceVersion:1039085,Generation:3,CreationTimestamp:2020-07-15 14:24:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Progressing True 2020-07-15 14:24:19 +0000 UTC 2020-07-15 14:24:05 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-07-15 14:24:21 +0000 UTC 2020-07-15 14:24:21 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jul 15 14:24:21.897: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-3681,SelfLink:/apis/apps/v1/namespaces/deployment-3681/replicasets/nginx-deployment-55fb7cb77f,UID:ad11f37a-5845-4380-8d42-07d8632bb063,ResourceVersion:1039124,Generation:3,CreationTimestamp:2020-07-15 14:24:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0d4b424b-8082-4c30-a5e8-a0389afb8446 0xc003105427 0xc003105428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 15 14:24:21.898: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jul 15 14:24:21.898: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-3681,SelfLink:/apis/apps/v1/namespaces/deployment-3681/replicasets/nginx-deployment-7b8c6f4498,UID:d211a836-b693-4b37-9f95-2287b2eaf08c,ResourceVersion:1039120,Generation:3,CreationTimestamp:2020-07-15 14:24:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0d4b424b-8082-4c30-a5e8-a0389afb8446 0xc0031054f7 0xc0031054f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jul 15 14:24:22.049: INFO: Pod "nginx-deployment-55fb7cb77f-5lggp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5lggp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-55fb7cb77f-5lggp,UID:bc729a90-81f8-41f8-b1ee-8c8895202d05,ResourceVersion:1039057,Generation:0,CreationTimestamp:2020-07-15 14:24:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f ad11f37a-5845-4380-8d42-07d8632bb063 0xc002fda987 0xc002fda988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdaa00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdaa20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-07-15 14:24:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.049: INFO: Pod "nginx-deployment-55fb7cb77f-6b9mk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6b9mk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-55fb7cb77f-6b9mk,UID:4ac3bf6b-ce10-4b42-b5ff-eeb09e76c905,ResourceVersion:1039117,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f ad11f37a-5845-4380-8d42-07d8632bb063 0xc002fdaaf0 0xc002fdaaf1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdab70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdab90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.049: INFO: Pod "nginx-deployment-55fb7cb77f-6qm2t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6qm2t,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-55fb7cb77f-6qm2t,UID:a8d96f97-5277-4779-95ae-f663c939958b,ResourceVersion:1039125,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f ad11f37a-5845-4380-8d42-07d8632bb063 0xc002fdac17 0xc002fdac18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdac90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdacb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.050: INFO: Pod "nginx-deployment-55fb7cb77f-8d95f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8d95f,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-55fb7cb77f-8d95f,UID:b3047b72-4dc8-4b7c-b411-7db560b59bd0,ResourceVersion:1039121,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f ad11f37a-5845-4380-8d42-07d8632bb063 0xc002fdad37 0xc002fdad38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdadb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdadd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.050: INFO: Pod "nginx-deployment-55fb7cb77f-8ktqw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8ktqw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-55fb7cb77f-8ktqw,UID:563c6850-44f9-4dc2-9448-ac5a99047502,ResourceVersion:1039098,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f ad11f37a-5845-4380-8d42-07d8632bb063 0xc002fdae57 0xc002fdae58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdaed0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdaef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.050: INFO: Pod "nginx-deployment-55fb7cb77f-b94z9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b94z9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-55fb7cb77f-b94z9,UID:7c060e79-c798-4134-b0c4-1e6fb3e0112f,ResourceVersion:1039044,Generation:0,CreationTimestamp:2020-07-15 14:24:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f ad11f37a-5845-4380-8d42-07d8632bb063 0xc002fdaf77 0xc002fdaf78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdaff0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdb010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-07-15 14:24:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.050: INFO: Pod "nginx-deployment-55fb7cb77f-blq99" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-blq99,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-55fb7cb77f-blq99,UID:ef6386b0-dfb1-4e03-8bdc-e318e7b692fc,ResourceVersion:1039060,Generation:0,CreationTimestamp:2020-07-15 14:24:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f ad11f37a-5845-4380-8d42-07d8632bb063 0xc002fdb0e0 0xc002fdb0e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdb160} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdb180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-07-15 14:24:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.050: INFO: Pod "nginx-deployment-55fb7cb77f-csktr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-csktr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-55fb7cb77f-csktr,UID:fc6574a1-bab8-4878-9394-635a32d582b5,ResourceVersion:1039118,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f ad11f37a-5845-4380-8d42-07d8632bb063 0xc002fdb250 0xc002fdb251}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdb2d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdb2f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.050: INFO: Pod "nginx-deployment-55fb7cb77f-d6nbx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-d6nbx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-55fb7cb77f-d6nbx,UID:994f86ce-a511-4ab8-a3db-259ed77f7411,ResourceVersion:1039113,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f ad11f37a-5845-4380-8d42-07d8632bb063 0xc002fdb377 0xc002fdb378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdb3f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdb410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.051: INFO: Pod "nginx-deployment-55fb7cb77f-dltc6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dltc6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-55fb7cb77f-dltc6,UID:627f3cf0-6983-4543-a013-98b067d91568,ResourceVersion:1039102,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f ad11f37a-5845-4380-8d42-07d8632bb063 0xc002fdb497 0xc002fdb498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdb510} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdb530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
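Every dump also shows the same pair of tolerations, node.kubernetes.io/not-ready and node.kubernetes.io/unreachable, with operator Exists and effect NoExecute. These are injected by the DefaultTolerationSeconds admission plugin, and the hex pointer printed after each one is its *int64 TolerationSeconds field (300 seconds by default). A sketch of the equivalent literal, assuming the k8s.io/api types:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        // What the admission plugin injects; the hex pointers in the dumps
        // above are this *int64 field rendered by Go's struct printer.
        seconds := int64(300)
        tol := v1.Toleration{
            Key:               "node.kubernetes.io/not-ready",
            Operator:          v1.TolerationOpExists,
            Effect:            v1.TaintEffectNoExecute,
            TolerationSeconds: &seconds,
        }
        fmt.Printf("%+v\n", tol)
    }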
Jul 15 14:24:22.051: INFO: Pod "nginx-deployment-55fb7cb77f-m62dq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-m62dq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-55fb7cb77f-m62dq,UID:14a7c53f-482f-441b-ab22-2411a65e41e7,ResourceVersion:1039045,Generation:0,CreationTimestamp:2020-07-15 14:24:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f ad11f37a-5845-4380-8d42-07d8632bb063 0xc002fdb5b7 0xc002fdb5b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdb630} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdb650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-07-15 14:24:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.051: INFO: Pod "nginx-deployment-55fb7cb77f-rrfkq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rrfkq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-55fb7cb77f-rrfkq,UID:04194d3a-217b-4b1f-933f-f4dab427ab4f,ResourceVersion:1039116,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f ad11f37a-5845-4380-8d42-07d8632bb063 0xc002fdb720 0xc002fdb721}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdb7a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdb7c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.051: INFO: Pod "nginx-deployment-55fb7cb77f-z72c8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z72c8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-55fb7cb77f-z72c8,UID:c76d2756-aafd-4d76-b76f-721b83163666,ResourceVersion:1039059,Generation:0,CreationTimestamp:2020-07-15 14:24:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f ad11f37a-5845-4380-8d42-07d8632bb063 0xc002fdb847 0xc002fdb848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdb8c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdb8e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:19 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-07-15 14:24:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
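That closes out the 55fb7cb77f pods in this snapshot: all Pending, none available. Note the OwnerReferences entry each one carries — apps/v1 ReplicaSet nginx-deployment-55fb7cb77f, with the two trailing pointers being the Controller and BlockOwnerDeletion booleans. The Deployment never owns pods directly; it manages ReplicaSets, which it tells apart by the pod-template-hash label, and each ReplicaSet owns its pods. The controlling owner can be recovered with metav1.GetControllerOf, a real apimachinery helper; the pod literal below is trimmed to the fields that matter:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        controller := true
        pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{
            Name: "nginx-deployment-55fb7cb77f-8ktqw",
            OwnerReferences: []metav1.OwnerReference{{
                APIVersion: "apps/v1",
                Kind:       "ReplicaSet",
                Name:       "nginx-deployment-55fb7cb77f",
                UID:        "ad11f37a-5845-4380-8d42-07d8632bb063",
                Controller: &controller,
            }},
        }}
        // GetControllerOf returns the owner reference with Controller=true,
        // i.e. the ReplicaSet the Deployment created for this template hash.
        if ref := metav1.GetControllerOf(pod); ref != nil {
            fmt.Printf("%s/%s\n", ref.Kind, ref.Name) // ReplicaSet/nginx-deployment-55fb7cb77f
        }
    }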
Jul 15 14:24:22.051: INFO: Pod "nginx-deployment-7b8c6f4498-2gpkz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2gpkz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-2gpkz,UID:8400931d-b666-4fde-ba8f-b5cce091e97c,ResourceVersion:1039097,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc002fdb9b0 0xc002fdb9b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdba20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdba40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.051: INFO: Pod "nginx-deployment-7b8c6f4498-2jlq6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2jlq6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-2jlq6,UID:c347d56d-9656-4cb6-97f2-6758c56f8d8b,ResourceVersion:1038989,Generation:0,CreationTimestamp:2020-07-15 14:24:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc002fdbac7 0xc002fdbac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdbb40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdbb60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:05 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.250,StartTime:2020-07-15 14:24:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-15 14:24:16 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://86751bd7cd6375172a115b38f0a9727ab3432b9c87c6a6ebb989f518e000a3af}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
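This is the anatomy of an "available" pod from the old 7b8c6f4498 ReplicaSet (image docker.io/library/nginx:1.14-alpine): Phase Running, Ready and ContainersReady True since 14:24:17, a PodIP assigned, and a ContainerStatuses entry holding ContainerStateRunning (started 14:24:16) together with the resolved image digest and containerd container ID. An illustrative helper (runningSince is my name) that reads those fields back out:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // runningSince reports when each running container started and which
    // image digest it resolved to, mirroring the fields in the dump above.
    func runningSince(pod *v1.Pod) {
        for _, cs := range pod.Status.ContainerStatuses {
            if cs.State.Running != nil {
                fmt.Printf("%s running since %s (image %s)\n",
                    cs.Name, cs.State.Running.StartedAt, cs.ImageID)
            }
        }
    }

    func main() {
        pod := &v1.Pod{Status: v1.PodStatus{ContainerStatuses: []v1.ContainerStatus{{
            Name:    "nginx",
            ImageID: "docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7",
            State:   v1.ContainerState{Running: &v1.ContainerStateRunning{StartedAt: metav1.Now()}},
        }}}}
        runningSince(pod)
    }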
Jul 15 14:24:22.052: INFO: Pod "nginx-deployment-7b8c6f4498-4f2kc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4f2kc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-4f2kc,UID:b1515924-63bb-465f-9073-d32d30e782cb,ResourceVersion:1039122,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc002fdbc37 0xc002fdbc38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdbcb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdbcd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-07-15 14:24:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.052: INFO: Pod "nginx-deployment-7b8c6f4498-4vs6g" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4vs6g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-4vs6g,UID:d7280231-8aab-4940-8c5d-837e01c457a4,ResourceVersion:1038962,Generation:0,CreationTimestamp:2020-07-15 14:24:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc002fdbd97 0xc002fdbd98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdbe10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdbe30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:05 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.161,StartTime:2020-07-15 14:24:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-15 14:24:13 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7a2e562c6a23728320071e626536f275e259b8323de44b7266077b13053de1de}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.052: INFO: Pod "nginx-deployment-7b8c6f4498-8klkc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8klkc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-8klkc,UID:c120709f-6cad-47fe-ad6c-eb7d5417ebdd,ResourceVersion:1039089,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc002fdbf07 0xc002fdbf08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fdbf80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fdbfa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.052: INFO: Pod "nginx-deployment-7b8c6f4498-9z7xx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9z7xx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-9z7xx,UID:c90e7835-f513-4817-a76d-e0db88844b52,ResourceVersion:1039099,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc00373e027 0xc00373e028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00373e0a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00373e0c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
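One more field common to every dump: QOSClass BestEffort. The container spec declares no resource requests or limits (the empty {map[] map[]} pair is the printed ResourceRequirements), which places each pod in the BestEffort QoS class. A reduced sketch of the classification — the real rule also distinguishes Guaranteed from Burstable, which never arises in this log:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // qosClass is a simplified form of the kubelet's QoS classification,
    // covering only the case in this log: no container declares requests or
    // limits, so every pod lands in BestEffort.
    func qosClass(pod *v1.Pod) v1.PodQOSClass {
        for _, c := range pod.Spec.Containers {
            if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
                return v1.PodQOSBurstable // the full rule also yields Guaranteed
            }
        }
        return v1.PodQOSBestEffort
    }

    func main() {
        pod := &v1.Pod{Spec: v1.PodSpec{Containers: []v1.Container{{Name: "nginx", Image: "nginx:404"}}}}
        fmt.Println(qosClass(pod)) // BestEffort
    }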
Jul 15 14:24:22.052: INFO: Pod "nginx-deployment-7b8c6f4498-bhmdn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bhmdn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-bhmdn,UID:a17c127e-7708-49ea-b055-4be4e912309b,ResourceVersion:1038992,Generation:0,CreationTimestamp:2020-07-15 14:24:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc00373e147 0xc00373e148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00373e1c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00373e1e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:05 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.252,StartTime:2020-07-15 14:24:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-15 14:24:17 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e9020699a425799a3c42849b7ca70e50d0d40d856854014f98e495573c513a6e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.053: INFO: Pod "nginx-deployment-7b8c6f4498-bqttc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bqttc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-bqttc,UID:a42042d8-f5d0-4326-8fcf-7d174bc90bfe,ResourceVersion:1038986,Generation:0,CreationTimestamp:2020-07-15 14:24:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc00373e2b7 0xc00373e2b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00373e330} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00373e350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:05 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.253,StartTime:2020-07-15 14:24:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-15 14:24:17 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c45ff3d42d16feeca02df637341711d8182ea69071f4ac1cb0c8bc1aef5c9482}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.053: INFO: Pod "nginx-deployment-7b8c6f4498-bs6wn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bs6wn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-bs6wn,UID:6b347d88-97bf-41a9-af09-4092e5e046e1,ResourceVersion:1038975,Generation:0,CreationTimestamp:2020-07-15 14:24:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc00373e427 0xc00373e428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00373e4a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00373e4c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:05 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.162,StartTime:2020-07-15 14:24:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-15 14:24:16 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://276a868ae111f9f9d60a11c7d750eccd436cd79d5c09cd488422329329bd5617}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.053: INFO: Pod "nginx-deployment-7b8c6f4498-dmh2k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dmh2k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-dmh2k,UID:4e3b28b3-e385-4873-9b85-3f6fd8866a15,ResourceVersion:1039129,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc00373e597 0xc00373e598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00373e610} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00373e630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-07-15 14:24:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.053: INFO: Pod "nginx-deployment-7b8c6f4498-h4245" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h4245,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-h4245,UID:5bf573a3-d01b-4baf-924a-b61f33b9c653,ResourceVersion:1039105,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc00373e6f7 0xc00373e6f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00373e770} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00373e790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.053: INFO: Pod "nginx-deployment-7b8c6f4498-hwcvk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hwcvk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-hwcvk,UID:67ea7c08-e1de-4033-a7e0-d555dfd50a90,ResourceVersion:1038995,Generation:0,CreationTimestamp:2020-07-15 14:24:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc00373e817 0xc00373e818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00373e890} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00373e8b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:05 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.251,StartTime:2020-07-15 14:24:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-15 14:24:17 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6b258a71c0d0eb7afa5ebaeb38cdada628237946a43c960e661ba2345d526d05}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.053: INFO: Pod "nginx-deployment-7b8c6f4498-jgdtp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jgdtp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-jgdtp,UID:36cc103e-b220-4dee-9fd0-b3cb620ddce7,ResourceVersion:1039111,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc00373e987 0xc00373e988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00373ea00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00373ea20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.054: INFO: Pod "nginx-deployment-7b8c6f4498-jkbgx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jkbgx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-jkbgx,UID:fc998c51-14f8-4482-99b2-fa602d9459a9,ResourceVersion:1039091,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc00373eaa7 0xc00373eaa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00373eb20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00373eb40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.054: INFO: Pod "nginx-deployment-7b8c6f4498-jzs2j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jzs2j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-jzs2j,UID:35f124e4-8fd8-4f95-aa2c-db2ec83159e2,ResourceVersion:1039110,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc00373ebc7 0xc00373ebc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00373ec40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00373ec60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.054: INFO: Pod "nginx-deployment-7b8c6f4498-kdx2w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kdx2w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-kdx2w,UID:f6addd1e-8fc1-458b-b22c-332914a7d1de,ResourceVersion:1039133,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc00373ece7 0xc00373ece8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00373ed60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00373ed80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-07-15 14:24:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.054: INFO: Pod "nginx-deployment-7b8c6f4498-m4nl6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-m4nl6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-m4nl6,UID:4d41d56f-4d9f-4787-bd7e-2b3e94daf165,ResourceVersion:1038957,Generation:0,CreationTimestamp:2020-07-15 14:24:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc00373ee47 0xc00373ee48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00373eec0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00373eee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:05 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.249,StartTime:2020-07-15 14:24:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-15 14:24:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://bc9dc69ef229f78c201e261facfe900c4a553f0b6d82b6dc36a9b04eccf99de7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.054: INFO: Pod "nginx-deployment-7b8c6f4498-pl6vl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pl6vl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-pl6vl,UID:45b25a29-926f-45f8-a98e-052c6bc153d2,ResourceVersion:1039103,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc00373efb7 0xc00373efb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00373f030} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00373f050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.054: INFO: Pod "nginx-deployment-7b8c6f4498-sglx5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sglx5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-sglx5,UID:0f1bebe3-89a8-4ae9-a1a1-8442ea45bd85,ResourceVersion:1038952,Generation:0,CreationTimestamp:2020-07-15 14:24:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc00373f0d7 0xc00373f0d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00373f150} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00373f170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:05 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.160,StartTime:2020-07-15 14:24:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-15 14:24:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://02676873686fc3df72d1f2e67e410f49842add7360320a6fd21f7ae26405c92c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 15 14:24:22.055: INFO: Pod "nginx-deployment-7b8c6f4498-snzvp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-snzvp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3681,SelfLink:/api/v1/namespaces/deployment-3681/pods/nginx-deployment-7b8c6f4498-snzvp,UID:0f2c2145-3d88-4cfb-8d55-3274255c0910,ResourceVersion:1039112,Generation:0,CreationTimestamp:2020-07-15 14:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d211a836-b693-4b37-9f95-2287b2eaf08c 0xc00373f247 0xc00373f248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69zt6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69zt6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-69zt6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00373f2c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00373f2e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 14:24:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:24:22.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3681" for this suite.
Jul 15 14:24:42.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:24:42.248: INFO: namespace deployment-3681 deletion completed in 20.15858068s

• [SLOW TEST:37.173 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
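
[Editor's note] The pod dumps above are the tail of the proportional-scaling check: the deployment was scaled while a rollout was in flight, so the controller split the added replicas between the old and new ReplicaSets in proportion to their current sizes. A minimal client-go sketch of that setup — not the suite's code; names, namespace, and replica counts are illustrative, and the pre-context Create/Update signatures match the v1.15-era client-go this suite was built against:

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"name": "nginx"}
	maxSurge := intstr.FromInt(3)
	maxUnavailable := intstr.FromInt(2)

	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// The RollingUpdate bounds are what make proportional scaling
			// observable: both ReplicaSets stay partially populated mid-rollout.
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	dep, err = cs.AppsV1().Deployments("default").Create(dep)
	if err != nil {
		panic(err)
	}

	// Scaling up while the rollout is still in progress: the deployment
	// controller distributes the 20 extra replicas across the old and new
	// ReplicaSets proportionally, which is what the listing above verifies.
	dep.Spec.Replicas = int32Ptr(30)
	if _, err := cs.AppsV1().Deployments("default").Update(dep); err != nil {
		panic(err)
	}
}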
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:24:42.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:24:42.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1932" for this suite.
Jul 15 14:24:48.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:24:48.490: INFO: namespace services-1932 deletion completed in 6.131350285s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.242 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
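
[Editor's note] The "secure master service" check amounts to fetching the built-in "kubernetes" service in the default namespace and confirming it exposes HTTPS on port 443. A sketch under the same v1.15-era client-go assumptions as above:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The "kubernetes" service in "default" fronts the API server itself.
	svc, err := cs.CoreV1().Services("default").Get("kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	secure := false
	for _, p := range svc.Spec.Ports {
		if p.Name == "https" && p.Port == 443 && p.Protocol == corev1.ProtocolTCP {
			secure = true
		}
	}
	fmt.Println("master service exposes https/443:", secure)
}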
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:24:48.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0715 14:24:58.613871       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 15 14:24:58.613: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:24:58.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2218" for this suite.
Jul 15 14:25:04.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:25:04.707: INFO: namespace gc-2218 deletion completed in 6.090850804s

• [SLOW TEST:16.214 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
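
[Editor's note] "Not orphaning" means the ReplicationController is deleted with a cascading propagation policy, so the garbage collector removes its pods by following their ownerReferences. A hypothetical sketch (the RC name is illustrative; same client-go vintage as above):

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Background propagation: the delete call returns once the RC is marked
	// deleted, and the garbage collector then removes the dependent pods —
	// the asynchronous cleanup that the "wait for all pods to be garbage
	// collected" step above polls for.
	policy := metav1.DeletePropagationBackground
	err = cs.CoreV1().ReplicationControllers("default").Delete(
		"example-rc", &metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}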
SSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:25:04.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-67fe1646-fd37-4d50-bd94-744360ea2eca
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:25:04.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6843" for this suite.
Jul 15 14:25:10.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:25:10.988: INFO: namespace configmap-6843 deletion completed in 6.149047076s

• [SLOW TEST:6.281 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
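
[Editor's note] The empty-key case is rejected by API-server validation, not by the client, so the test only has to attempt the create and assert the error. A sketch:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value-1"}, // empty key: invalid
	}
	_, err = cs.CoreV1().ConfigMaps("default").Create(cm)
	if err == nil {
		panic("expected the API server to reject an empty data key")
	}
	fmt.Println("rejected as Invalid:", errors.IsInvalid(err))
}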
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:25:10.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jul 15 14:25:16.080: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:25:17.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1490" for this suite.
Jul 15 14:25:39.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:25:39.313: INFO: namespace replicaset-1490 deletion completed in 22.196014426s

• [SLOW TEST:28.324 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
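
[Editor's note] Adoption and release are both driven by label selectors plus ownerReferences: a ReplicaSet adopts any live, unowned pod its selector matches, and drops its ownerReference (releasing the pod) once the pod's labels stop matching. The release half, sketched with hypothetical names and namespace:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods := cs.CoreV1().Pods("default")
	pod, err := pods.Get("pod-adoption-release", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Flip the label the ReplicaSet's selector matches on. The RS controller
	// removes its ownerReference from this pod (the pod is "released") and
	// creates a replacement to keep its replica count satisfied.
	pod.Labels["name"] = "released"
	if _, err := pods.Update(pod); err != nil {
		panic(err)
	}
}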
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:25:39.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 15 14:25:39.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5193'
Jul 15 14:25:39.466: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 15 14:25:39.466: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jul 15 14:25:39.502: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-v882t]
Jul 15 14:25:39.502: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-v882t" in namespace "kubectl-5193" to be "running and ready"
Jul 15 14:25:39.505: INFO: Pod "e2e-test-nginx-rc-v882t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.658489ms
Jul 15 14:25:41.508: INFO: Pod "e2e-test-nginx-rc-v882t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006259957s
Jul 15 14:25:43.513: INFO: Pod "e2e-test-nginx-rc-v882t": Phase="Running", Reason="", readiness=true. Elapsed: 4.010932221s
Jul 15 14:25:43.513: INFO: Pod "e2e-test-nginx-rc-v882t" satisfied condition "running and ready"
Jul 15 14:25:43.513: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-v882t]
Jul 15 14:25:43.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-5193'
Jul 15 14:25:43.652: INFO: stderr: ""
Jul 15 14:25:43.652: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jul 15 14:25:43.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5193'
Jul 15 14:25:43.763: INFO: stderr: ""
Jul 15 14:25:43.763: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:25:43.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5193" for this suite.
Jul 15 14:26:05.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:26:05.920: INFO: namespace kubectl-5193 deletion completed in 22.122356661s

• [SLOW TEST:26.607 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
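
[Editor's note] The deprecation warning in the output above is the point of interest: --generator=run/v1 makes kubectl submit a bare ReplicationController (the log confirms "replicationcontroller/e2e-test-nginx-rc created"). Roughly the object those flags imply, sketched with client-go — the exact field shape is inferred, not captured from the API:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"run": "e2e-test-nginx-rc"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-rc", Labels: labels},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-rc",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	if _, err := cs.CoreV1().ReplicationControllers("default").Create(rc); err != nil {
		panic(err)
	}
}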
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:26:05.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-1138796b-bd97-4800-9a27-18d24f3fd1d2
STEP: Creating a pod to test consume secrets
Jul 15 14:26:06.003: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9ef3be3a-a811-42e6-a299-2ffeea5e69f3" in namespace "projected-4622" to be "success or failure"
Jul 15 14:26:06.014: INFO: Pod "pod-projected-secrets-9ef3be3a-a811-42e6-a299-2ffeea5e69f3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.854859ms
Jul 15 14:26:08.109: INFO: Pod "pod-projected-secrets-9ef3be3a-a811-42e6-a299-2ffeea5e69f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106588207s
Jul 15 14:26:10.121: INFO: Pod "pod-projected-secrets-9ef3be3a-a811-42e6-a299-2ffeea5e69f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118227081s
STEP: Saw pod success
Jul 15 14:26:10.121: INFO: Pod "pod-projected-secrets-9ef3be3a-a811-42e6-a299-2ffeea5e69f3" satisfied condition "success or failure"
Jul 15 14:26:10.163: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-9ef3be3a-a811-42e6-a299-2ffeea5e69f3 container projected-secret-volume-test: 
STEP: delete the pod
Jul 15 14:26:10.181: INFO: Waiting for pod pod-projected-secrets-9ef3be3a-a811-42e6-a299-2ffeea5e69f3 to disappear
Jul 15 14:26:10.185: INFO: Pod pod-projected-secrets-9ef3be3a-a811-42e6-a299-2ffeea5e69f3 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:26:10.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4622" for this suite.
Jul 15 14:26:16.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:26:16.283: INFO: namespace projected-4622 deletion completed in 6.091151475s

• [SLOW TEST:10.363 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
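
[Editor's note] "Mappings and Item Mode" refers to the items list of a projected secret source: each key is remapped to a chosen path and given an explicit file mode. A sketch of the volume shape (secret name, key, mode, and the probe command are illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	mode := int32(0400) // per-item mode, visible as the mounted file's permissions
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-secret-test-map",
								},
								// Mapping: key "data-1" appears as "new-path-data-1".
								Items: []corev1.KeyToPath{{
									Key: "data-1", Path: "new-path-data-1", Mode: &mode,
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /projected && cat /projected/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected-secret-volume", MountPath: "/projected",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}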
SSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:26:16.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jul 15 14:26:16.373: INFO: Waiting up to 5m0s for pod "downward-api-4d613b2c-8526-4de6-b368-932ad3258bbb" in namespace "downward-api-1458" to be "success or failure"
Jul 15 14:26:16.395: INFO: Pod "downward-api-4d613b2c-8526-4de6-b368-932ad3258bbb": Phase="Pending", Reason="", readiness=false. Elapsed: 22.247404ms
Jul 15 14:26:18.399: INFO: Pod "downward-api-4d613b2c-8526-4de6-b368-932ad3258bbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026202783s
Jul 15 14:26:20.404: INFO: Pod "downward-api-4d613b2c-8526-4de6-b368-932ad3258bbb": Phase="Running", Reason="", readiness=true. Elapsed: 4.030585968s
Jul 15 14:26:22.408: INFO: Pod "downward-api-4d613b2c-8526-4de6-b368-932ad3258bbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03496814s
STEP: Saw pod success
Jul 15 14:26:22.408: INFO: Pod "downward-api-4d613b2c-8526-4de6-b368-932ad3258bbb" satisfied condition "success or failure"
Jul 15 14:26:22.412: INFO: Trying to get logs from node iruya-worker2 pod downward-api-4d613b2c-8526-4de6-b368-932ad3258bbb container dapi-container: 
STEP: delete the pod
Jul 15 14:26:22.434: INFO: Waiting for pod downward-api-4d613b2c-8526-4de6-b368-932ad3258bbb to disappear
Jul 15 14:26:22.437: INFO: Pod downward-api-4d613b2c-8526-4de6-b368-932ad3258bbb no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:26:22.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1458" for this suite.
Jul 15 14:26:28.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:26:28.539: INFO: namespace downward-api-1458 deletion completed in 6.097384272s

• [SLOW TEST:12.255 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
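
[Editor's note] This downward-API variant surfaces the container's own resource limits and requests as environment variables via resourceFieldRef. A sketch of the relevant pod spec (names and resource values are illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// resourceEnv resolves an env var from the named container's effective resources.
func resourceEnv(name, res string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: name,
		ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{
				ContainerName: "dapi-container",
				Resource:      res,
			},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					resourceEnv("CPU_LIMIT", "limits.cpu"),
					resourceEnv("MEMORY_LIMIT", "limits.memory"),
					resourceEnv("CPU_REQUEST", "requests.cpu"),
					resourceEnv("MEMORY_REQUEST", "requests.memory"),
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}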
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:26:28.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0715 14:26:29.664587       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 15 14:26:29.664: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:26:29.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-718" for this suite.
Jul 15 14:26:35.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:26:35.849: INFO: namespace gc-718 deletion completed in 6.181584706s

• [SLOW TEST:7.310 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
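
[Editor's note] Unlike the pod-cascade test earlier, the owner here is a Deployment and the dependent its ReplicaSet; the transient "expected 0 rs, got 1 rs" lines are the poll loop catching the garbage collector mid-cascade. With foreground propagation the owner itself would only disappear after its dependents; a sketch of that variant (deployment name illustrative):

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Foreground: the Deployment gets a deletion timestamp plus the
	// foregroundDeletion finalizer, and is only removed after the garbage
	// collector has deleted its ReplicaSets (and their pods).
	policy := metav1.DeletePropagationForeground
	err = cs.AppsV1().Deployments("default").Delete(
		"example-deployment", &metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}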
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:26:35.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 15 14:26:35.992: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f695753-13d0-4f82-b997-533c3a4ad561" in namespace "projected-3086" to be "success or failure"
Jul 15 14:26:35.996: INFO: Pod "downwardapi-volume-0f695753-13d0-4f82-b997-533c3a4ad561": Phase="Pending", Reason="", readiness=false. Elapsed: 3.484339ms
Jul 15 14:26:38.097: INFO: Pod "downwardapi-volume-0f695753-13d0-4f82-b997-533c3a4ad561": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105199438s
Jul 15 14:26:40.101: INFO: Pod "downwardapi-volume-0f695753-13d0-4f82-b997-533c3a4ad561": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.108890529s
STEP: Saw pod success
Jul 15 14:26:40.101: INFO: Pod "downwardapi-volume-0f695753-13d0-4f82-b997-533c3a4ad561" satisfied condition "success or failure"
Jul 15 14:26:40.104: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-0f695753-13d0-4f82-b997-533c3a4ad561 container client-container: 
STEP: delete the pod
Jul 15 14:26:40.178: INFO: Waiting for pod downwardapi-volume-0f695753-13d0-4f82-b997-533c3a4ad561 to disappear
Jul 15 14:26:40.198: INFO: Pod downwardapi-volume-0f695753-13d0-4f82-b997-533c3a4ad561 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:26:40.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3086" for this suite.
Jul 15 14:26:46.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:26:46.317: INFO: namespace projected-3086 deletion completed in 6.114889152s

• [SLOW TEST:10.468 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
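
[Editor's note] Same downward-API idea as the env-var test above, but here the memory limit is delivered as a file through a projected downwardAPI volume source. A sketch (paths, names, and the probe command are illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-memlimit"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									// The container's memory limit lands in this file.
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "podinfo", MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}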
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:26:46.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jul 15 14:26:50.902: INFO: Successfully updated pod "labelsupdate0c23c823-b6c0-43af-9106-df98bf6a586f"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:26:52.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-519" for this suite.
Jul 15 14:27:14.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:27:15.028: INFO: namespace downward-api-519 deletion completed in 22.102390554s

• [SLOW TEST:28.711 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
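
[Editor's note] The live-update path is what this test exercises: metadata.labels exposed through a downwardAPI volume is one of the few pod fields the kubelet re-projects after an in-place update, so editing the pod's labels eventually rewrites the mounted file (the "Successfully updated pod" line above is the mutation step). A sketch of that update half, with hypothetical pod and label names:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods := cs.CoreV1().Pods("default")
	pod, err := pods.Get("labelsupdate", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Assumes the pod mounts a downwardAPI volume with an item
	// {Path: "labels", FieldRef: {FieldPath: "metadata.labels"}}.
	// After this update the kubelet rewrites the "labels" file on its next
	// sync, which the test detects by tailing the file inside the container.
	pod.Labels["key"] = "value"
	if _, err := pods.Update(pod); err != nil {
		panic(err)
	}
}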
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:27:15.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:27:19.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5808" for this suite.
Jul 15 14:28:09.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:28:09.283: INFO: namespace kubelet-test-5808 deletion completed in 50.106475005s

• [SLOW TEST:54.255 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
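
[Editor's note] The kubelet check here is simply: run a busybox command that writes to stdout, let the container exit, and read the output back through the logs endpoint. A sketch (image tag and message are illustrative; the context-free DoRaw signature matches this suite's client-go vintage):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-scheduling"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "echo 'Hello from the busybox pod'"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}

	// In practice, poll until the pod reaches Succeeded before reading logs.
	raw, err := cs.CoreV1().Pods("default").
		GetLogs("busybox-scheduling", &corev1.PodLogOptions{}).DoRaw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("kubelet captured: %s", raw)
}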
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:28:09.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 15 14:28:09.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6167'
Jul 15 14:28:09.837: INFO: stderr: ""
Jul 15 14:28:09.837: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jul 15 14:28:09.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6167'
Jul 15 14:28:16.740: INFO: stderr: ""
Jul 15 14:28:16.740: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:28:16.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6167" for this suite.
Jul 15 14:28:22.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:28:22.846: INFO: namespace kubectl-6167 deletion completed in 6.100005798s

• [SLOW TEST:13.563 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:28:22.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-pn6j
STEP: Creating a pod to test atomic-volume-subpath
Jul 15 14:28:22.946: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pn6j" in namespace "subpath-5860" to be "success or failure"
Jul 15 14:28:22.963: INFO: Pod "pod-subpath-test-configmap-pn6j": Phase="Pending", Reason="", readiness=false. Elapsed: 17.443123ms
Jul 15 14:28:25.028: INFO: Pod "pod-subpath-test-configmap-pn6j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08265526s
Jul 15 14:28:27.032: INFO: Pod "pod-subpath-test-configmap-pn6j": Phase="Running", Reason="", readiness=true. Elapsed: 4.086615381s
Jul 15 14:28:29.036: INFO: Pod "pod-subpath-test-configmap-pn6j": Phase="Running", Reason="", readiness=true. Elapsed: 6.089833969s
Jul 15 14:28:31.040: INFO: Pod "pod-subpath-test-configmap-pn6j": Phase="Running", Reason="", readiness=true. Elapsed: 8.093950681s
Jul 15 14:28:33.044: INFO: Pod "pod-subpath-test-configmap-pn6j": Phase="Running", Reason="", readiness=true. Elapsed: 10.098343804s
Jul 15 14:28:35.049: INFO: Pod "pod-subpath-test-configmap-pn6j": Phase="Running", Reason="", readiness=true. Elapsed: 12.1027994s
Jul 15 14:28:37.053: INFO: Pod "pod-subpath-test-configmap-pn6j": Phase="Running", Reason="", readiness=true. Elapsed: 14.107161183s
Jul 15 14:28:39.057: INFO: Pod "pod-subpath-test-configmap-pn6j": Phase="Running", Reason="", readiness=true. Elapsed: 16.11131881s
Jul 15 14:28:41.061: INFO: Pod "pod-subpath-test-configmap-pn6j": Phase="Running", Reason="", readiness=true. Elapsed: 18.114875436s
Jul 15 14:28:43.065: INFO: Pod "pod-subpath-test-configmap-pn6j": Phase="Running", Reason="", readiness=true. Elapsed: 20.119284764s
Jul 15 14:28:45.069: INFO: Pod "pod-subpath-test-configmap-pn6j": Phase="Running", Reason="", readiness=true. Elapsed: 22.123158433s
Jul 15 14:28:47.073: INFO: Pod "pod-subpath-test-configmap-pn6j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.127589109s
STEP: Saw pod success
Jul 15 14:28:47.073: INFO: Pod "pod-subpath-test-configmap-pn6j" satisfied condition "success or failure"
Jul 15 14:28:47.077: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-pn6j container test-container-subpath-configmap-pn6j: 
STEP: delete the pod
Jul 15 14:28:47.095: INFO: Waiting for pod pod-subpath-test-configmap-pn6j to disappear
Jul 15 14:28:47.100: INFO: Pod pod-subpath-test-configmap-pn6j no longer exists
STEP: Deleting pod pod-subpath-test-configmap-pn6j
Jul 15 14:28:47.100: INFO: Deleting pod "pod-subpath-test-configmap-pn6j" in namespace "subpath-5860"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:28:47.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5860" for this suite.
Jul 15 14:28:53.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:28:53.231: INFO: namespace subpath-5860 deletion completed in 6.12499548s

• [SLOW TEST:30.384 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
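Note: the spec above mounts a single ConfigMap key into the container through volumeMounts[].subPath and exercises the kubelet's atomic-writer update path. A minimal sketch of such a pod, using the same Go API types the e2e framework is built on; the ConfigMap name, key, and image are illustrative assumptions, not values from this run:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						// ConfigMap name is an assumption.
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox", // stand-in; the suite uses its own test images
				Command: []string{"cat", "/test-volume/data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/test-volume/data",
					SubPath:   "data", // mount one key, not the whole ConfigMap
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```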
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:28:53.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 15 14:28:53.330: INFO: Create a RollingUpdate DaemonSet
Jul 15 14:28:53.333: INFO: Check that daemon pods launch on every node of the cluster
Jul 15 14:28:53.340: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 15 14:28:53.357: INFO: Number of nodes with available pods: 0
Jul 15 14:28:53.357: INFO: Node iruya-worker is running more than one daemon pod
Jul 15 14:28:54.363: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 15 14:28:54.366: INFO: Number of nodes with available pods: 0
Jul 15 14:28:54.366: INFO: Node iruya-worker is running more than one daemon pod
Jul 15 14:28:55.369: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 15 14:28:55.371: INFO: Number of nodes with available pods: 0
Jul 15 14:28:55.371: INFO: Node iruya-worker is running more than one daemon pod
Jul 15 14:28:56.364: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 15 14:28:56.366: INFO: Number of nodes with available pods: 0
Jul 15 14:28:56.367: INFO: Node iruya-worker is running more than one daemon pod
Jul 15 14:28:57.411: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 15 14:28:57.414: INFO: Number of nodes with available pods: 2
Jul 15 14:28:57.414: INFO: Number of running nodes: 2, number of available pods: 2
Jul 15 14:28:57.414: INFO: Update the DaemonSet to trigger a rollout
Jul 15 14:28:57.419: INFO: Updating DaemonSet daemon-set
Jul 15 14:29:07.442: INFO: Roll back the DaemonSet before rollout is complete
Jul 15 14:29:07.449: INFO: Updating DaemonSet daemon-set
Jul 15 14:29:07.449: INFO: Make sure DaemonSet rollback is complete
Jul 15 14:29:07.460: INFO: Wrong image for pod: daemon-set-qlwcs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jul 15 14:29:07.461: INFO: Pod daemon-set-qlwcs is not available
Jul 15 14:29:07.573: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 15 14:29:08.577: INFO: Wrong image for pod: daemon-set-qlwcs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jul 15 14:29:08.578: INFO: Pod daemon-set-qlwcs is not available
Jul 15 14:29:08.582: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 15 14:29:09.577: INFO: Wrong image for pod: daemon-set-qlwcs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jul 15 14:29:09.577: INFO: Pod daemon-set-qlwcs is not available
Jul 15 14:29:09.580: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 15 14:29:10.590: INFO: Pod daemon-set-zc4bn is not available
Jul 15 14:29:10.595: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-730, will wait for the garbage collector to delete the pods
Jul 15 14:29:10.660: INFO: Deleting DaemonSet.extensions daemon-set took: 6.367497ms
Jul 15 14:29:10.960: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.234347ms
Jul 15 14:29:16.762: INFO: Number of nodes with available pods: 0
Jul 15 14:29:16.762: INFO: Number of running nodes: 0, number of available pods: 0
Jul 15 14:29:16.765: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-730/daemonsets","resourceVersion":"1040393"},"items":null}

Jul 15 14:29:16.767: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-730/pods","resourceVersion":"1040393"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:29:16.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-730" for this suite.
Jul 15 14:29:22.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:29:22.895: INFO: namespace daemonsets-730 deletion completed in 6.113570194s

• [SLOW TEST:29.665 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
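Note: the sequence above creates a RollingUpdate DaemonSet, flips its image to the unpullable foo:non-existent, then rolls back mid-rollout; the assertion is that pods still running the good image are not restarted. A sketch of the template mutation under those same two images (label key and container name are assumptions):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine", // image seen in the log
					}},
				},
			},
		},
	}

	good := ds.Spec.Template.Spec.Containers[0].Image
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent" // stuck rollout
	// Mid-rollout, roll back by restoring the previous template.
	ds.Spec.Template.Spec.Containers[0].Image = good
	fmt.Println("rolled back to", ds.Spec.Template.Spec.Containers[0].Image)
}
```

Against a live cluster each image change would be an Update call on the DaemonSet object; the sketch only shows the spec being toggled.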
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:29:22.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 15 14:29:23.049: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"27b07bbb-eef8-4bc0-bd7e-9f2d3ad99e03", Controller:(*bool)(0xc001e4127a), BlockOwnerDeletion:(*bool)(0xc001e4127b)}}
Jul 15 14:29:23.061: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"42bec780-712f-477e-aa7f-daa337c43711", Controller:(*bool)(0xc001e4140a), BlockOwnerDeletion:(*bool)(0xc001e4140b)}}
Jul 15 14:29:23.079: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"17419325-a9d5-41a7-a1ab-75240dbe9852", Controller:(*bool)(0xc002ee6eb2), BlockOwnerDeletion:(*bool)(0xc002ee6eb3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:29:28.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8548" for this suite.
Jul 15 14:29:34.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:29:34.236: INFO: namespace gc-8548 deletion completed in 6.089082158s

• [SLOW TEST:11.340 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
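Note: the three OwnerReferences logged above form a deliberate circle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. A sketch of how such blocking references are built (UIDs are placeholders; real ones are assigned by the API server):

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownerRef builds a deletion-blocking owner reference to another pod.
func ownerRef(name string, uid types.UID) metav1.OwnerReference {
	block := true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               name,
		UID:                uid,
		BlockOwnerDeletion: &block,
	}
}

func main() {
	// The cycle exercised above: pod1 <- pod3, pod2 <- pod1, pod3 <- pod2.
	owners := map[string]metav1.OwnerReference{
		"pod1": ownerRef("pod3", "uid-3"),
		"pod2": ownerRef("pod1", "uid-1"),
		"pod3": ownerRef("pod2", "uid-2"),
	}
	for pod, ref := range owners {
		fmt.Printf("%s.OwnerReferences -> %s (%s)\n", pod, ref.Name, ref.UID)
	}
	// Deleting one pod must not deadlock: the garbage collector detects
	// the circle and collects all three, which is what the spec verifies.
}
```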
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:29:34.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-f7241a9a-d5dd-4133-912a-d78cac61f328
STEP: Creating a pod to test consume configMaps
Jul 15 14:29:34.319: INFO: Waiting up to 5m0s for pod "pod-configmaps-a398ec5b-92c4-4521-85c1-03422bd3fdd7" in namespace "configmap-3765" to be "success or failure"
Jul 15 14:29:34.350: INFO: Pod "pod-configmaps-a398ec5b-92c4-4521-85c1-03422bd3fdd7": Phase="Pending", Reason="", readiness=false. Elapsed: 31.666425ms
Jul 15 14:29:36.354: INFO: Pod "pod-configmaps-a398ec5b-92c4-4521-85c1-03422bd3fdd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035637672s
Jul 15 14:29:38.359: INFO: Pod "pod-configmaps-a398ec5b-92c4-4521-85c1-03422bd3fdd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039983915s
STEP: Saw pod success
Jul 15 14:29:38.359: INFO: Pod "pod-configmaps-a398ec5b-92c4-4521-85c1-03422bd3fdd7" satisfied condition "success or failure"
Jul 15 14:29:38.361: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a398ec5b-92c4-4521-85c1-03422bd3fdd7 container configmap-volume-test: 
STEP: delete the pod
Jul 15 14:29:38.384: INFO: Waiting for pod pod-configmaps-a398ec5b-92c4-4521-85c1-03422bd3fdd7 to disappear
Jul 15 14:29:38.416: INFO: Pod pod-configmaps-a398ec5b-92c4-4521-85c1-03422bd3fdd7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:29:38.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3765" for this suite.
Jul 15 14:29:44.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:29:44.547: INFO: namespace configmap-3765 deletion completed in 6.126915506s

• [SLOW TEST:10.310 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
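Note: this spec mounts one ConfigMap at two paths inside the same pod and checks both copies are readable. A sketch of the volume wiring (ConfigMap name, mount paths, and command are assumptions):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// cmVolume exposes the named ConfigMap under a given volume name.
func cmVolume(volName, cmName string) corev1.Volume {
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
			},
		},
	}
}

func main() {
	spec := corev1.PodSpec{
		Volumes: []corev1.Volume{
			cmVolume("configmap-volume-1", "configmap-test-volume"),
			cmVolume("configmap-volume-2", "configmap-test-volume"),
		},
		Containers: []corev1.Container{{
			Name:    "configmap-volume-test",
			Image:   "busybox",
			Command: []string{"sh", "-c", "cat /etc/cm-1/data /etc/cm-2/data"},
			VolumeMounts: []corev1.VolumeMount{
				{Name: "configmap-volume-1", MountPath: "/etc/cm-1"},
				{Name: "configmap-volume-2", MountPath: "/etc/cm-2"},
			},
		}},
	}
	fmt.Printf("%d volumes backed by one ConfigMap\n", len(spec.Volumes))
}
```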
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:29:44.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-2bcac0a8-ed21-47f6-b70b-41e234e1d86f
STEP: Creating a pod to test consume configMaps
Jul 15 14:29:44.612: INFO: Waiting up to 5m0s for pod "pod-configmaps-414056b8-c075-49ce-84ad-53675de7550e" in namespace "configmap-2043" to be "success or failure"
Jul 15 14:29:44.625: INFO: Pod "pod-configmaps-414056b8-c075-49ce-84ad-53675de7550e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.119157ms
Jul 15 14:29:46.630: INFO: Pod "pod-configmaps-414056b8-c075-49ce-84ad-53675de7550e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017661559s
Jul 15 14:29:48.634: INFO: Pod "pod-configmaps-414056b8-c075-49ce-84ad-53675de7550e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021513362s
STEP: Saw pod success
Jul 15 14:29:48.634: INFO: Pod "pod-configmaps-414056b8-c075-49ce-84ad-53675de7550e" satisfied condition "success or failure"
Jul 15 14:29:48.636: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-414056b8-c075-49ce-84ad-53675de7550e container configmap-volume-test: 
STEP: delete the pod
Jul 15 14:29:48.680: INFO: Waiting for pod pod-configmaps-414056b8-c075-49ce-84ad-53675de7550e to disappear
Jul 15 14:29:48.688: INFO: Pod pod-configmaps-414056b8-c075-49ce-84ad-53675de7550e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:29:48.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2043" for this suite.
Jul 15 14:29:54.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:29:54.778: INFO: namespace configmap-2043 deletion completed in 6.085745327s

• [SLOW TEST:10.230 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
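Note: defaultMode is a *int32 interpreted as an octal file mode and applied to every file projected from the ConfigMap; the test then verifies the bits on the mounted files. A tiny sketch (ConfigMap name and the chosen mode are assumptions):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // octal: read-only for the owning user
	src := corev1.ConfigMapVolumeSource{
		LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
		DefaultMode:          &mode,
	}
	fmt.Printf("projected files get mode %#o\n", *src.DefaultMode)
}
```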
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:29:54.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-c596c02f-a42e-4dfe-adba-ddf6234650e8
Jul 15 14:29:54.861: INFO: Pod name my-hostname-basic-c596c02f-a42e-4dfe-adba-ddf6234650e8: Found 0 pods out of 1
Jul 15 14:29:59.907: INFO: Pod name my-hostname-basic-c596c02f-a42e-4dfe-adba-ddf6234650e8: Found 1 pods out of 1
Jul 15 14:29:59.907: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c596c02f-a42e-4dfe-adba-ddf6234650e8" are running
Jul 15 14:29:59.910: INFO: Pod "my-hostname-basic-c596c02f-a42e-4dfe-adba-ddf6234650e8-nz6vz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-15 14:29:54 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-15 14:29:58 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-15 14:29:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-15 14:29:54 +0000 UTC Reason: Message:}])
Jul 15 14:29:59.910: INFO: Trying to dial the pod
Jul 15 14:30:04.937: INFO: Controller my-hostname-basic-c596c02f-a42e-4dfe-adba-ddf6234650e8: Got expected result from replica 1 [my-hostname-basic-c596c02f-a42e-4dfe-adba-ddf6234650e8-nz6vz]: "my-hostname-basic-c596c02f-a42e-4dfe-adba-ddf6234650e8-nz6vz", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:30:04.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5927" for this suite.
Jul 15 14:30:10.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:30:11.036: INFO: namespace replication-controller-5927 deletion completed in 6.096410927s

• [SLOW TEST:16.258 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
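Note: the controller above runs one replica of a hostname-serving image, waits for it to be Running and Ready, then dials each replica until it answers with its own pod name. A sketch of an equivalent ReplicationController; the image and port are assumptions based on what such tests conventionally use:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	name := "my-hostname-basic" // the real test appends a UUID
	replicas := int32(1)
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"name": name},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": name}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  name,
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},          // assumed
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(rc.Spec, "", "  ")
	fmt.Println(string(b))
}
```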
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:30:11.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jul 15 14:30:11.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jul 15 14:30:11.318: INFO: stderr: ""
Jul 15 14:30:11.318: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:30:11.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7929" for this suite.
Jul 15 14:30:17.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:30:17.539: INFO: namespace kubectl-7929 deletion completed in 6.193912571s

• [SLOW TEST:6.503 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
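Note: kubectl api-versions is a thin wrapper over the discovery endpoint; the same list, including the bare "v1" this spec asserts on, can be fetched with client-go. A sketch using the kubeconfig path shown above:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	groups, err := clientset.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion) // e.g. "v1", "apps/v1", "batch/v1", ...
		}
	}
}
```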
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:30:17.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 15 14:30:17.697: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a69cc1c7-df01-4aaf-bbaf-9778b83be51b" in namespace "downward-api-1443" to be "success or failure"
Jul 15 14:30:17.837: INFO: Pod "downwardapi-volume-a69cc1c7-df01-4aaf-bbaf-9778b83be51b": Phase="Pending", Reason="", readiness=false. Elapsed: 140.023773ms
Jul 15 14:30:20.100: INFO: Pod "downwardapi-volume-a69cc1c7-df01-4aaf-bbaf-9778b83be51b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.402416966s
Jul 15 14:30:22.104: INFO: Pod "downwardapi-volume-a69cc1c7-df01-4aaf-bbaf-9778b83be51b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.40648534s
STEP: Saw pod success
Jul 15 14:30:22.104: INFO: Pod "downwardapi-volume-a69cc1c7-df01-4aaf-bbaf-9778b83be51b" satisfied condition "success or failure"
Jul 15 14:30:22.107: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a69cc1c7-df01-4aaf-bbaf-9778b83be51b container client-container: 
STEP: delete the pod
Jul 15 14:30:22.148: INFO: Waiting for pod downwardapi-volume-a69cc1c7-df01-4aaf-bbaf-9778b83be51b to disappear
Jul 15 14:30:22.189: INFO: Pod downwardapi-volume-a69cc1c7-df01-4aaf-bbaf-9778b83be51b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:30:22.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1443" for this suite.
Jul 15 14:30:28.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:30:28.284: INFO: namespace downward-api-1443 deletion completed in 6.09006711s

• [SLOW TEST:10.743 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
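Note: the downward API volume here exposes limits.memory through a resourceFieldRef; because the container sets no memory limit, the kubelet substitutes the node's allocatable memory, which is what the spec verifies. A sketch of that volume item (file path and container name are assumptions):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
	// With no memory limit on client-container, the projected file
	// reports node allocatable memory instead.
	fmt.Println("downward API file:", vol.VolumeSource.DownwardAPI.Items[0].Path)
}
```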
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:30:28.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul 15 14:30:32.455: INFO: Waiting up to 5m0s for pod "client-envvars-420c004c-4191-4f1d-b50e-e20722c67013" in namespace "pods-4095" to be "success or failure"
Jul 15 14:30:32.499: INFO: Pod "client-envvars-420c004c-4191-4f1d-b50e-e20722c67013": Phase="Pending", Reason="", readiness=false. Elapsed: 43.308673ms
Jul 15 14:30:34.503: INFO: Pod "client-envvars-420c004c-4191-4f1d-b50e-e20722c67013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047586365s
Jul 15 14:30:36.506: INFO: Pod "client-envvars-420c004c-4191-4f1d-b50e-e20722c67013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05125555s
STEP: Saw pod success
Jul 15 14:30:36.507: INFO: Pod "client-envvars-420c004c-4191-4f1d-b50e-e20722c67013" satisfied condition "success or failure"
Jul 15 14:30:36.509: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-420c004c-4191-4f1d-b50e-e20722c67013 container env3cont: 
STEP: delete the pod
Jul 15 14:30:36.541: INFO: Waiting for pod client-envvars-420c004c-4191-4f1d-b50e-e20722c67013 to disappear
Jul 15 14:30:36.552: INFO: Pod client-envvars-420c004c-4191-4f1d-b50e-e20722c67013 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:30:36.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4095" for this suite.
Jul 15 14:31:18.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:31:18.656: INFO: namespace pods-4095 deletion completed in 42.100844986s

• [SLOW TEST:50.372 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
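Note: this spec starts a server pod plus a Service, then launches a client pod and checks the service environment variables the kubelet injects at container start (which is why the Service must exist before the client pod). From inside such a client the variables read like this; the service name "fooservice" is an assumption:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// For a Service named "fooservice" created before this pod started,
	// the kubelet injects FOOSERVICE_SERVICE_HOST / FOOSERVICE_SERVICE_PORT
	// (name uppercased, dashes mapped to underscores).
	host := os.Getenv("FOOSERVICE_SERVICE_HOST")
	port := os.Getenv("FOOSERVICE_SERVICE_PORT")
	fmt.Printf("fooservice reachable at %s:%s\n", host, port)
}
```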
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:31:18.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul 15 14:31:18.745: INFO: Waiting up to 5m0s for pod "pod-1cf4dee7-d30c-43f4-878d-51bd612efda7" in namespace "emptydir-1558" to be "success or failure"
Jul 15 14:31:18.749: INFO: Pod "pod-1cf4dee7-d30c-43f4-878d-51bd612efda7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168493ms
Jul 15 14:31:20.765: INFO: Pod "pod-1cf4dee7-d30c-43f4-878d-51bd612efda7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020091049s
Jul 15 14:31:22.769: INFO: Pod "pod-1cf4dee7-d30c-43f4-878d-51bd612efda7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02441082s
STEP: Saw pod success
Jul 15 14:31:22.769: INFO: Pod "pod-1cf4dee7-d30c-43f4-878d-51bd612efda7" satisfied condition "success or failure"
Jul 15 14:31:22.772: INFO: Trying to get logs from node iruya-worker2 pod pod-1cf4dee7-d30c-43f4-878d-51bd612efda7 container test-container: 
STEP: delete the pod
Jul 15 14:31:22.813: INFO: Waiting for pod pod-1cf4dee7-d30c-43f4-878d-51bd612efda7 to disappear
Jul 15 14:31:22.827: INFO: Pod pod-1cf4dee7-d30c-43f4-878d-51bd612efda7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:31:22.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1558" for this suite.
Jul 15 14:31:28.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:31:28.915: INFO: namespace emptydir-1558 deletion completed in 6.085499985s

• [SLOW TEST:10.259 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
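Note: medium: Memory backs the emptyDir with tmpfs; the test container lists the mount and the suite checks the reported type and permission bits. A sketch of the volume (mount path in the comment is an assumption):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumMemory, // tmpfs-backed emptyDir
			},
		},
	}
	// The test container runs something like `ls -l /test-volume`
	// and the suite asserts on the mode bits in its output.
	fmt.Println("emptyDir medium:", vol.VolumeSource.EmptyDir.Medium)
}
```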
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:31:28.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 15 14:31:33.093: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:31:33.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9473" for this suite.
Jul 15 14:31:39.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:31:39.217: INFO: namespace container-runtime-9473 deletion completed in 6.10388635s

• [SLOW TEST:10.301 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
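Note: here the container writes DONE to a non-default terminationMessagePath while running as a non-root user, and the kubelet surfaces that file's contents as the termination message (the "Expected: &{DONE}" line above). A sketch of such a container; the path, UID, and image are assumptions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1000) // non-root; exact UID is an assumption
	c := corev1.Container{
		Name:                   "termination-message-container",
		Image:                  "busybox",
		Command:                []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		TerminationMessagePath: "/dev/termination-custom-log", // non-default path
		SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
	}
	fmt.Println("termination message read from:", c.TerminationMessagePath)
}
```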
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:31:39.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 15 14:31:43.329: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:31:43.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4304" for this suite.
Jul 15 14:31:49.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:31:49.480: INFO: namespace container-runtime-4304 deletion completed in 6.127229707s

• [SLOW TEST:10.263 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
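Note: this variant sets terminationMessagePolicy: FallbackToLogsOnError. When the container fails and its message file is empty, the kubelet takes the tail of the container log instead, hence the container reaching Failed yet still reporting DONE. A sketch (image and command are assumptions):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "termination-message-container",
		Image: "busybox",
		// Print to the log, write nothing to the message file, then fail.
		Command:                  []string{"sh", "-c", "echo -n DONE; exit 1"},
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	// On failure with an empty message file, the kubelet falls back to
	// the log tail, so the termination message becomes "DONE".
	fmt.Println("policy:", c.TerminationMessagePolicy)
}
```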
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:31:49.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-970709a2-bbcf-47bf-8316-4be30651e8bf
STEP: Creating a pod to test consume configMaps
Jul 15 14:31:49.568: INFO: Waiting up to 5m0s for pod "pod-configmaps-7b871dbc-86c9-45f2-8933-96966606321a" in namespace "configmap-3609" to be "success or failure"
Jul 15 14:31:49.589: INFO: Pod "pod-configmaps-7b871dbc-86c9-45f2-8933-96966606321a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.655136ms
Jul 15 14:31:51.597: INFO: Pod "pod-configmaps-7b871dbc-86c9-45f2-8933-96966606321a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029520895s
Jul 15 14:31:53.615: INFO: Pod "pod-configmaps-7b871dbc-86c9-45f2-8933-96966606321a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047037924s
STEP: Saw pod success
Jul 15 14:31:53.615: INFO: Pod "pod-configmaps-7b871dbc-86c9-45f2-8933-96966606321a" satisfied condition "success or failure"
Jul 15 14:31:53.618: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-7b871dbc-86c9-45f2-8933-96966606321a container configmap-volume-test: 
STEP: delete the pod
Jul 15 14:31:53.637: INFO: Waiting for pod pod-configmaps-7b871dbc-86c9-45f2-8933-96966606321a to disappear
Jul 15 14:31:53.795: INFO: Pod pod-configmaps-7b871dbc-86c9-45f2-8933-96966606321a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:31:53.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3609" for this suite.
Jul 15 14:31:59.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:31:59.921: INFO: namespace configmap-3609 deletion completed in 6.122280245s

• [SLOW TEST:10.441 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:31:59.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-5b99358f-87bb-47be-9144-79adabbd4fdf
STEP: Creating a pod to test consume configMaps
Jul 15 14:32:00.034: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-958d6483-5c1c-42b9-8687-bd4e69fcefc7" in namespace "projected-2489" to be "success or failure"
Jul 15 14:32:00.049: INFO: Pod "pod-projected-configmaps-958d6483-5c1c-42b9-8687-bd4e69fcefc7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.245971ms
Jul 15 14:32:02.053: INFO: Pod "pod-projected-configmaps-958d6483-5c1c-42b9-8687-bd4e69fcefc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019233307s
Jul 15 14:32:04.057: INFO: Pod "pod-projected-configmaps-958d6483-5c1c-42b9-8687-bd4e69fcefc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022814701s
STEP: Saw pod success
Jul 15 14:32:04.057: INFO: Pod "pod-projected-configmaps-958d6483-5c1c-42b9-8687-bd4e69fcefc7" satisfied condition "success or failure"
Jul 15 14:32:04.060: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-958d6483-5c1c-42b9-8687-bd4e69fcefc7 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 15 14:32:04.076: INFO: Waiting for pod pod-projected-configmaps-958d6483-5c1c-42b9-8687-bd4e69fcefc7 to disappear
Jul 15 14:32:04.106: INFO: Pod pod-projected-configmaps-958d6483-5c1c-42b9-8687-bd4e69fcefc7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:32:04.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2489" for this suite.
Jul 15 14:32:10.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:32:10.199: INFO: namespace projected-2489 deletion completed in 6.088404763s

• [SLOW TEST:10.277 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
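Note: projected volumes wrap one or more sources (here a single ConfigMap) behind one mount and share the atomic-update semantics of plain ConfigMap volumes, which the "updates should be reflected in volume" spec just below relies on. A sketch of the volume; the ConfigMap name and key-to-path mapping are assumptions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume",
						},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "projected-configmap-volume-data-1",
						}},
					},
				}},
			},
		},
	}
	fmt.Println("projected sources:", len(vol.VolumeSource.Projected.Sources))
}
```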
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:32:10.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-734ad67f-8642-4651-b4a9-78bd93ab913b
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-734ad67f-8642-4651-b4a9-78bd93ab913b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:32:16.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-695" for this suite.
Jul 15 14:32:38.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:32:38.413: INFO: namespace projected-695 deletion completed in 22.09405868s

• [SLOW TEST:28.214 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:32:38.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 15 14:32:38.496: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1c4756b-97cd-4d0e-a376-266d7974b9a9" in namespace "downward-api-7695" to be "success or failure"
Jul 15 14:32:38.499: INFO: Pod "downwardapi-volume-d1c4756b-97cd-4d0e-a376-266d7974b9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.384114ms
Jul 15 14:32:40.502: INFO: Pod "downwardapi-volume-d1c4756b-97cd-4d0e-a376-266d7974b9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006688081s
Jul 15 14:32:42.506: INFO: Pod "downwardapi-volume-d1c4756b-97cd-4d0e-a376-266d7974b9a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010702958s
STEP: Saw pod success
Jul 15 14:32:42.506: INFO: Pod "downwardapi-volume-d1c4756b-97cd-4d0e-a376-266d7974b9a9" satisfied condition "success or failure"
Jul 15 14:32:42.509: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d1c4756b-97cd-4d0e-a376-266d7974b9a9 container client-container: 
STEP: delete the pod
Jul 15 14:32:42.556: INFO: Waiting for pod downwardapi-volume-d1c4756b-97cd-4d0e-a376-266d7974b9a9 to disappear
Jul 15 14:32:42.615: INFO: Pod downwardapi-volume-d1c4756b-97cd-4d0e-a376-266d7974b9a9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:32:42.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7695" for this suite.
Jul 15 14:32:48.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:32:48.741: INFO: namespace downward-api-7695 deletion completed in 6.098518035s

• [SLOW TEST:10.327 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 15 14:32:48.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul 15 14:32:48.856: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5e5893e-6e6b-452e-b80d-dc53d6908c2c" in namespace "downward-api-6898" to be "success or failure"
Jul 15 14:32:48.885: INFO: Pod "downwardapi-volume-e5e5893e-6e6b-452e-b80d-dc53d6908c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 28.05162ms
Jul 15 14:32:50.888: INFO: Pod "downwardapi-volume-e5e5893e-6e6b-452e-b80d-dc53d6908c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031555644s
Jul 15 14:32:52.892: INFO: Pod "downwardapi-volume-e5e5893e-6e6b-452e-b80d-dc53d6908c2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035394948s
STEP: Saw pod success
Jul 15 14:32:52.892: INFO: Pod "downwardapi-volume-e5e5893e-6e6b-452e-b80d-dc53d6908c2c" satisfied condition "success or failure"
Jul 15 14:32:52.895: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e5e5893e-6e6b-452e-b80d-dc53d6908c2c container client-container: 
STEP: delete the pod
Jul 15 14:32:52.947: INFO: Waiting for pod downwardapi-volume-e5e5893e-6e6b-452e-b80d-dc53d6908c2c to disappear
Jul 15 14:32:52.974: INFO: Pod downwardapi-volume-e5e5893e-6e6b-452e-b80d-dc53d6908c2c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 15 14:32:52.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6898" for this suite.
Jul 15 14:32:58.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 15 14:32:59.070: INFO: namespace downward-api-6898 deletion completed in 6.092066091s

• [SLOW TEST:10.329 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
Jul 15 14:32:59.071: INFO: Running AfterSuite actions on all nodes
Jul 15 14:32:59.071: INFO: Running AfterSuite actions on node 1
Jul 15 14:32:59.071: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 5830.049 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS