I0830 16:28:19.302539 7 e2e.go:243] Starting e2e run "8a135acd-3c95-4211-a475-8eba91622e1c" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1598804887 - Will randomize all specs
Will run 215 of 4413 specs
Aug 30 16:28:20.667: INFO: >>> kubeConfig: /root/.kube/config
Aug 30 16:28:20.752: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 30 16:28:21.001: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 30 16:28:21.162: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 30 16:28:21.162: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 30 16:28:21.162: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 30 16:28:21.207: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 30 16:28:21.207: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 30 16:28:21.207: INFO: e2e test version: v1.15.12
Aug 30 16:28:21.211: INFO: kube-apiserver version: v1.15.12
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 16:28:21.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Aug 30 16:28:21.332: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 30 16:28:21.397: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90a8e52d-5376-4048-8b53-bd2cc3541c20" in namespace "projected-6230" to be "success or failure"
Aug 30 16:28:21.465: INFO: Pod "downwardapi-volume-90a8e52d-5376-4048-8b53-bd2cc3541c20": Phase="Pending", Reason="", readiness=false. Elapsed: 67.95782ms
Aug 30 16:28:23.474: INFO: Pod "downwardapi-volume-90a8e52d-5376-4048-8b53-bd2cc3541c20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077354405s
Aug 30 16:28:25.512: INFO: Pod "downwardapi-volume-90a8e52d-5376-4048-8b53-bd2cc3541c20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115090008s
Aug 30 16:28:27.519: INFO: Pod "downwardapi-volume-90a8e52d-5376-4048-8b53-bd2cc3541c20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.122568203s
STEP: Saw pod success
Aug 30 16:28:27.520: INFO: Pod "downwardapi-volume-90a8e52d-5376-4048-8b53-bd2cc3541c20" satisfied condition "success or failure"
Aug 30 16:28:27.525: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-90a8e52d-5376-4048-8b53-bd2cc3541c20 container client-container:
STEP: delete the pod
Aug 30 16:28:27.568: INFO: Waiting for pod downwardapi-volume-90a8e52d-5376-4048-8b53-bd2cc3541c20 to disappear
Aug 30 16:28:27.572: INFO: Pod downwardapi-volume-90a8e52d-5376-4048-8b53-bd2cc3541c20 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 16:28:27.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6230" for this suite.
Aug 30 16:28:33.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 16:28:33.788: INFO: namespace projected-6230 deletion completed in 6.200109902s
• [SLOW TEST:12.575 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
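The pod this spec creates can be reproduced by hand: mount a downwardAPI volume that exposes the container's CPU request and let the container print it. A minimal sketch; the pod name, image, and 250m request below are illustrative, not the exact manifest the e2e framework generates:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo    # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m                  # assumed value; the test asserts the file mirrors this
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
            divisor: 1m              # report the request in millicores
  EOF
  kubectl logs downwardapi-volume-demo    # should print 250 once the pod has Succeeded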
[sig-network] DNS should provide DNS for the cluster [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 16:28:33.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8349.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8349.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 30 16:28:42.036: INFO: DNS probes using dns-8349/dns-test-cfe2faaf-cb53-452c-8754-fed673bf3101 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 16:28:42.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8349" for this suite.
Aug 30 16:28:48.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 16:28:48.379: INFO: namespace dns-8349 deletion completed in 6.237174532s
• [SLOW TEST:14.583 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
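The probe scripts above are logged with Go-template escaping, so every $$ is a literal $ in the shell that actually runs. De-escaped, each check is a single dig query, and the same lookup can be run by hand from any throwaway pod that has dig installed (the image below is illustrative):

  kubectl run dns-probe --restart=Never --rm -it --image=tutum/dnsutils -- \
    dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A
  # the script's per-iteration check, de-escaped:
  check="$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" \
    && test -n "$check" && echo OK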
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 16:28:48.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 30 16:28:48.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3118'
Aug 30 16:28:53.117: INFO: stderr: ""
Aug 30 16:28:53.117: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug 30 16:28:53.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3118'
Aug 30 16:29:03.694: INFO: stderr: ""
Aug 30 16:29:03.694: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 16:29:03.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3118" for this suite.
Aug 30 16:29:09.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 16:29:09.863: INFO: namespace kubectl-3118 deletion completed in 6.157194321s
• [SLOW TEST:21.482 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
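The spec drives kubectl directly, so the same invocation works by hand. Note that --generator=run-pod/v1 belongs to this v1.15-era client; on newer kubectl the flag is gone and kubectl run always creates a bare pod, so a present-day equivalent would be:

  kubectl run e2e-test-nginx-pod --restart=Never \
    --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3118
  kubectl get pod e2e-test-nginx-pod --namespace=kubectl-3118     # verify the pod was created
  kubectl delete pod e2e-test-nginx-pod --namespace=kubectl-3118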
"downwardapi-volume-9f93b55b-2d8f-40bc-81b6-02902213795d" in namespace "downward-api-7663" to be "success or failure" Aug 30 16:29:10.006: INFO: Pod "downwardapi-volume-9f93b55b-2d8f-40bc-81b6-02902213795d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.361747ms Aug 30 16:29:12.486: INFO: Pod "downwardapi-volume-9f93b55b-2d8f-40bc-81b6-02902213795d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.495122599s Aug 30 16:29:14.513: INFO: Pod "downwardapi-volume-9f93b55b-2d8f-40bc-81b6-02902213795d": Phase="Running", Reason="", readiness=true. Elapsed: 4.522868971s Aug 30 16:29:16.520: INFO: Pod "downwardapi-volume-9f93b55b-2d8f-40bc-81b6-02902213795d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.52925718s STEP: Saw pod success Aug 30 16:29:16.520: INFO: Pod "downwardapi-volume-9f93b55b-2d8f-40bc-81b6-02902213795d" satisfied condition "success or failure" Aug 30 16:29:16.524: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-9f93b55b-2d8f-40bc-81b6-02902213795d container client-container: STEP: delete the pod Aug 30 16:29:16.677: INFO: Waiting for pod downwardapi-volume-9f93b55b-2d8f-40bc-81b6-02902213795d to disappear Aug 30 16:29:16.730: INFO: Pod downwardapi-volume-9f93b55b-2d8f-40bc-81b6-02902213795d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:29:16.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7663" for this suite. Aug 30 16:29:22.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:29:23.099: INFO: namespace downward-api-7663 deletion completed in 6.362243383s • [SLOW TEST:13.233 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:29:23.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] 
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 16:29:23.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Aug 30 16:29:23.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4472'
Aug 30 16:29:24.955: INFO: stderr: ""
Aug 30 16:29:24.955: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 30 16:29:24.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4472'
Aug 30 16:29:26.296: INFO: stderr: ""
Aug 30 16:29:26.296: INFO: stdout: "update-demo-nautilus-6h7hc update-demo-nautilus-r26x4 "
Aug 30 16:29:26.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6h7hc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4472'
Aug 30 16:29:27.755: INFO: stderr: ""
Aug 30 16:29:27.755: INFO: stdout: ""
Aug 30 16:29:27.755: INFO: update-demo-nautilus-6h7hc is created but not running
Aug 30 16:29:32.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4472'
Aug 30 16:29:34.098: INFO: stderr: ""
Aug 30 16:29:34.098: INFO: stdout: "update-demo-nautilus-6h7hc update-demo-nautilus-r26x4 "
Aug 30 16:29:34.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6h7hc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4472'
Aug 30 16:29:35.358: INFO: stderr: ""
Aug 30 16:29:35.359: INFO: stdout: "true"
Aug 30 16:29:35.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6h7hc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4472'
Aug 30 16:29:36.642: INFO: stderr: ""
Aug 30 16:29:36.642: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 30 16:29:36.642: INFO: validating pod update-demo-nautilus-6h7hc
Aug 30 16:29:36.648: INFO: got data: { "image": "nautilus.jpg" }
Aug 30 16:29:36.648: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 30 16:29:36.649: INFO: update-demo-nautilus-6h7hc is verified up and running
Aug 30 16:29:36.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r26x4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4472'
Aug 30 16:29:37.935: INFO: stderr: ""
Aug 30 16:29:37.935: INFO: stdout: "true"
Aug 30 16:29:37.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r26x4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4472'
Aug 30 16:29:39.222: INFO: stderr: ""
Aug 30 16:29:39.222: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 30 16:29:39.222: INFO: validating pod update-demo-nautilus-r26x4
Aug 30 16:29:39.244: INFO: got data: { "image": "nautilus.jpg" }
Aug 30 16:29:39.244: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 30 16:29:39.244: INFO: update-demo-nautilus-r26x4 is verified up and running
STEP: rolling-update to new replication controller
Aug 30 16:29:39.254: INFO: scanned /root for discovery docs:
Aug 30 16:29:39.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4472'
Aug 30 16:30:05.674: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 30 16:30:05.675: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 30 16:30:05.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4472'
Aug 30 16:30:07.007: INFO: stderr: ""
Aug 30 16:30:07.007: INFO: stdout: "update-demo-kitten-7pp5h update-demo-kitten-tlcjh "
Aug 30 16:30:07.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7pp5h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4472'
Aug 30 16:30:08.327: INFO: stderr: ""
Aug 30 16:30:08.328: INFO: stdout: "true"
Aug 30 16:30:08.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7pp5h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4472'
Aug 30 16:30:09.679: INFO: stderr: ""
Aug 30 16:30:09.679: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 30 16:30:09.679: INFO: validating pod update-demo-kitten-7pp5h
Aug 30 16:30:09.684: INFO: got data: { "image": "kitten.jpg" }
Aug 30 16:30:09.684: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 30 16:30:09.684: INFO: update-demo-kitten-7pp5h is verified up and running
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4472' Aug 30 16:30:10.940: INFO: stderr: "" Aug 30 16:30:10.941: INFO: stdout: "true" Aug 30 16:30:10.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tlcjh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4472' Aug 30 16:30:12.215: INFO: stderr: "" Aug 30 16:30:12.215: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Aug 30 16:30:12.215: INFO: validating pod update-demo-kitten-tlcjh Aug 30 16:30:12.239: INFO: got data: { "image": "kitten.jpg" } Aug 30 16:30:12.239: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Aug 30 16:30:12.239: INFO: update-demo-kitten-tlcjh is verified up and running [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:30:12.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4472" for this suite. Aug 30 16:30:36.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:30:36.397: INFO: namespace kubectl-4472 deletion completed in 24.147870245s • [SLOW TEST:73.297 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:30:36.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Aug 30 16:30:36.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9561' Aug 30 16:30:38.213: INFO: stderr: "" Aug 30 16:30:38.213: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 16:30:36.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Aug 30 16:30:36.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9561'
Aug 30 16:30:38.213: INFO: stderr: ""
Aug 30 16:30:38.213: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 30 16:30:39.238: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 16:30:39.241: INFO: Found 0 / 1
Aug 30 16:30:40.236: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 16:30:40.236: INFO: Found 0 / 1
Aug 30 16:30:41.233: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 16:30:41.234: INFO: Found 0 / 1
Aug 30 16:30:42.236: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 16:30:42.237: INFO: Found 1 / 1
Aug 30 16:30:42.238: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Aug 30 16:30:42.243: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 16:30:42.244: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Aug 30 16:30:42.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-dnhlb --namespace=kubectl-9561 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 30 16:30:43.511: INFO: stderr: ""
Aug 30 16:30:43.511: INFO: stdout: "pod/redis-master-dnhlb patched\n"
STEP: checking annotations
Aug 30 16:30:43.516: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 16:30:43.516: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 16:30:43.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9561" for this suite.
Aug 30 16:31:05.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 16:31:06.054: INFO: namespace kubectl-9561 deletion completed in 22.531014897s
• [SLOW TEST:29.654 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
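The patch above is a strategic-merge patch against pod metadata; reproduced by hand, with a check that the annotation landed (pod name from this run):

  kubectl patch pod redis-master-dnhlb --namespace=kubectl-9561 \
    -p '{"metadata":{"annotations":{"x":"y"}}}'
  kubectl get pod redis-master-dnhlb --namespace=kubectl-9561 \
    -o jsonpath='{.metadata.annotations.x}'    # prints: y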
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 16:31:06.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-hsp47 in namespace proxy-9695
I0830 16:31:06.755699 7 runners.go:180] Created replication controller with name: proxy-service-hsp47, namespace: proxy-9695, replica count: 1
I0830 16:31:07.810260 7 runners.go:180] proxy-service-hsp47 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0830 16:31:08.811527 7 runners.go:180] proxy-service-hsp47 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0830 16:31:09.812590 7 runners.go:180] proxy-service-hsp47 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0830 16:31:10.813998 7 runners.go:180] proxy-service-hsp47 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0830 16:31:11.814668 7 runners.go:180] proxy-service-hsp47 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0830 16:31:12.815415 7 runners.go:180] proxy-service-hsp47 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0830 16:31:13.816185 7 runners.go:180] proxy-service-hsp47 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0830 16:31:14.816797 7 runners.go:180] proxy-service-hsp47 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0830 16:31:15.817450 7 runners.go:180] proxy-service-hsp47 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0830 16:31:16.818079 7 runners.go:180] proxy-service-hsp47 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Aug 30 16:31:16.860: INFO: setup took 10.448604431s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
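Each attempt below fetches pod and service proxy subresources of the apiserver; the same URLs can be exercised through kubectl proxy. A sketch (the local port is illustrative); in the pods/ form, a bare name:port targets HTTP, while the http: and https: prefixes pick the backend scheme explicitly:

  kubectl proxy --port=8001 &
  # pod proxy, by port number
  curl http://127.0.0.1:8001/api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/
  # service proxy, by named port
  curl http://127.0.0.1:8001/api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/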
Aug 30 16:31:17.286: INFO: (0) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 424.491985ms)
Aug 30 16:31:17.286: INFO: (0) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck/proxy/: test (200; 424.656772ms)
Aug 30 16:31:17.286: INFO: (0) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 425.00673ms)
Aug 30 16:31:17.286: INFO: (0) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 424.840485ms)
Aug 30 16:31:17.286: INFO: (0) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:1080/proxy/: test<... (200; 422.184977ms)
Aug 30 16:31:17.286: INFO: (0) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ... (200; 424.934568ms)
Aug 30 16:31:17.286: INFO: (0) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 424.893181ms)
Aug 30 16:31:17.289: INFO: (0) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 427.439278ms)
Aug 30 16:31:17.289: INFO: (0) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 428.293578ms)
Aug 30 16:31:17.289: INFO: (0) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/: foo (200; 428.146034ms)
Aug 30 16:31:17.290: INFO: (0) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 428.48237ms)
Aug 30 16:31:17.346: INFO: (0) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 484.502586ms)
Aug 30 16:31:17.346: INFO: (0) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: test<... (200; 151.052287ms)
Aug 30 16:31:17.499: INFO: (1) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 150.853175ms)
Aug 30 16:31:17.499: INFO: (1) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 151.913998ms)
Aug 30 16:31:17.499: INFO: (1) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 151.791779ms)
Aug 30 16:31:17.500: INFO: (1) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:460/proxy/: tls baz (200; 151.948981ms)
Aug 30 16:31:17.500: INFO: (1) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: test (200; 152.261954ms)
Aug 30 16:31:17.500: INFO: (1) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 152.784665ms)
Aug 30 16:31:17.500: INFO: (1) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 152.942688ms)
Aug 30 16:31:17.500: INFO: (1) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ... (200; 152.468554ms)
Aug 30 16:31:17.511: INFO: (1) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 163.700216ms)
Aug 30 16:31:17.511: INFO: (1) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 163.315844ms)
Aug 30 16:31:17.511: INFO: (1) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/: foo (200; 163.551974ms)
Aug 30 16:31:17.511: INFO: (1) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 163.675198ms)
Aug 30 16:31:17.511: INFO: (1) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 164.218194ms)
Aug 30 16:31:17.511: INFO: (1) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname2/proxy/: tls qux (200; 164.175023ms)
Aug 30 16:31:17.516: INFO: (2) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 4.774288ms)
Aug 30 16:31:17.517: INFO: (2) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 5.601763ms)
Aug 30 16:31:17.518: INFO: (2) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:460/proxy/: tls baz (200; 6.219174ms)
Aug 30 16:31:17.518: INFO: (2) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/: foo (200; 6.535407ms)
Aug 30 16:31:17.518: INFO: (2) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 6.428705ms)
Aug 30 16:31:17.519: INFO: (2) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 6.876267ms)
Aug 30 16:31:17.519: INFO: (2) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 7.080431ms)
Aug 30 16:31:17.519: INFO: (2) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ... (200; 6.975096ms)
Aug 30 16:31:17.519: INFO: (2) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 7.649942ms)
Aug 30 16:31:17.519: INFO: (2) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck/proxy/: test (200; 7.680201ms)
Aug 30 16:31:17.522: INFO: (2) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: test<... (200; 10.469786ms)
Aug 30 16:31:17.522: INFO: (2) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 9.955509ms)
Aug 30 16:31:17.523: INFO: (2) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname2/proxy/: tls qux (200; 10.661601ms)
Aug 30 16:31:17.523: INFO: (2) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 10.7304ms)
Aug 30 16:31:17.523: INFO: (2) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 10.1598ms)
Aug 30 16:31:17.526: INFO: (3) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: test (200; 4.844158ms)
Aug 30 16:31:17.529: INFO: (3) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 5.329434ms)
Aug 30 16:31:17.529: INFO: (3) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 5.909818ms)
Aug 30 16:31:17.529: INFO: (3) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/: foo (200; 5.944576ms)
Aug 30 16:31:17.529: INFO: (3) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:1080/proxy/: test<... (200; 6.189772ms)
Aug 30 16:31:17.529: INFO: (3) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 6.269935ms)
Aug 30 16:31:17.530: INFO: (3) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 6.300035ms)
Aug 30 16:31:17.530: INFO: (3) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 6.590258ms)
Aug 30 16:31:17.531: INFO: (3) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 7.325308ms)
Aug 30 16:31:17.531: INFO: (3) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ... (200; 7.240442ms)
Aug 30 16:31:17.531: INFO: (3) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:460/proxy/: tls baz (200; 7.452384ms)
Aug 30 16:31:17.531: INFO: (3) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname2/proxy/: tls qux (200; 7.881906ms)
Aug 30 16:31:17.532: INFO: (3) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 8.431032ms)
Aug 30 16:31:17.533: INFO: (3) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 9.13811ms)
Aug 30 16:31:17.533: INFO: (3) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 9.1117ms)
Aug 30 16:31:17.536: INFO: (4) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 3.24352ms)
Aug 30 16:31:17.537: INFO: (4) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 3.820008ms)
Aug 30 16:31:17.538: INFO: (4) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 4.648409ms)
Aug 30 16:31:17.538: INFO: (4) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 4.047483ms)
Aug 30 16:31:17.538: INFO: (4) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 5.274496ms)
Aug 30 16:31:17.538: INFO: (4) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: test<... (200; 5.289302ms)
Aug 30 16:31:17.539: INFO: (4) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 5.350112ms)
Aug 30 16:31:17.539: INFO: (4) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 5.22161ms)
Aug 30 16:31:17.539: INFO: (4) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname2/proxy/: tls qux (200; 5.427699ms)
Aug 30 16:31:17.539: INFO: (4) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/: foo (200; 5.803846ms)
Aug 30 16:31:17.539: INFO: (4) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 6.325409ms)
Aug 30 16:31:17.539: INFO: (4) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ... (200; 6.326781ms)
Aug 30 16:31:17.541: INFO: (4) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 7.758864ms)
Aug 30 16:31:17.541: INFO: (4) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck/proxy/: test (200; 7.668105ms)
Aug 30 16:31:17.541: INFO: (4) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:460/proxy/: tls baz (200; 7.68157ms)
Aug 30 16:31:17.546: INFO: (5) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname2/proxy/: tls qux (200; 4.767126ms)
Aug 30 16:31:17.546: INFO: (5) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 4.642792ms)
Aug 30 16:31:17.546: INFO: (5) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 4.789075ms)
Aug 30 16:31:17.547: INFO: (5) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: test (200; 5.954823ms)
Aug 30 16:31:17.548: INFO: (5) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/: foo (200; 6.193415ms)
Aug 30 16:31:17.548: INFO: (5) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ... (200; 5.933371ms)
Aug 30 16:31:17.551: INFO: (5) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:1080/proxy/: test<... (200; 9.465276ms)
Aug 30 16:31:17.552: INFO: (5) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:460/proxy/: tls baz (200; 9.618715ms)
Aug 30 16:31:17.552: INFO: (5) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 10.200789ms)
Aug 30 16:31:17.552: INFO: (5) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 10.482193ms)
Aug 30 16:31:17.553: INFO: (5) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 10.806547ms)
Aug 30 16:31:17.553: INFO: (5) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 10.974786ms)
Aug 30 16:31:17.558: INFO: (6) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck/proxy/: test (200; 5.409469ms)
Aug 30 16:31:17.559: INFO: (6) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 5.488631ms)
Aug 30 16:31:17.559: INFO: (6) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 5.644921ms)
Aug 30 16:31:17.559: INFO: (6) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 5.829889ms)
Aug 30 16:31:17.559: INFO: (6) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 5.888787ms)
Aug 30 16:31:17.560: INFO: (6) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 6.938216ms)
Aug 30 16:31:17.560: INFO: (6) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: ... (200; 8.371906ms)
Aug 30 16:31:17.561: INFO: (6) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:1080/proxy/: test<... (200; 8.124033ms)
Aug 30 16:31:17.563: INFO: (6) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/: foo (200; 10.419291ms)
Aug 30 16:31:17.570: INFO: (6) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 16.690267ms)
Aug 30 16:31:17.570: INFO: (6) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 17.196126ms)
Aug 30 16:31:17.570: INFO: (6) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 17.113643ms)
Aug 30 16:31:17.570: INFO: (6) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname2/proxy/: tls qux (200; 17.449715ms)
Aug 30 16:31:17.570: INFO: (6) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 17.181518ms)
Aug 30 16:31:17.577: INFO: (7) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck/proxy/: test (200; 6.310436ms)
Aug 30 16:31:17.577: INFO: (7) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 5.868052ms)
Aug 30 16:31:17.578: INFO: (7) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:460/proxy/: tls baz (200; 6.808513ms)
Aug 30 16:31:17.578: INFO: (7) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 6.721647ms)
Aug 30 16:31:17.578: INFO: (7) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: ... (200; 7.124158ms)
Aug 30 16:31:17.578: INFO: (7) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 7.503888ms)
Aug 30 16:31:17.578: INFO: (7) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:1080/proxy/: test<... (200; 7.789278ms)
Aug 30 16:31:17.579: INFO: (7) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 8.011165ms)
Aug 30 16:31:17.579: INFO: (7) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 8.073814ms)
Aug 30 16:31:17.579: INFO: (7) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 7.996083ms)
Aug 30 16:31:17.579: INFO: (7) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/: foo (200; 8.531246ms)
Aug 30 16:31:17.579: INFO: (7) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname2/proxy/: tls qux (200; 8.77713ms)
Aug 30 16:31:17.581: INFO: (7) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 10.477757ms)
Aug 30 16:31:17.582: INFO: (7) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 10.338251ms)
Aug 30 16:31:17.589: INFO: (8) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 7.023484ms)
Aug 30 16:31:17.589: INFO: (8) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:1080/proxy/: test<... (200; 6.759071ms)
Aug 30 16:31:17.589: INFO: (8) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 6.811746ms)
Aug 30 16:31:17.589: INFO: (8) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck/proxy/: test (200; 6.911303ms)
Aug 30 16:31:17.589: INFO: (8) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ... (200; 6.749364ms)
Aug 30 16:31:17.589: INFO: (8) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 6.880427ms)
Aug 30 16:31:17.589: INFO: (8) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 7.266338ms)
Aug 30 16:31:17.590: INFO: (8) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: ... (200; 3.407245ms)
Aug 30 16:31:17.595: INFO: (9) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck/proxy/: test (200; 4.263143ms)
Aug 30 16:31:17.596: INFO: (9) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:460/proxy/: tls baz (200; 4.833382ms)
Aug 30 16:31:17.596: INFO: (9) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 5.09677ms)
Aug 30 16:31:17.596: INFO: (9) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 5.53158ms)
Aug 30 16:31:17.597: INFO: (9) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:1080/proxy/: test<... (200; 5.655802ms)
Aug 30 16:31:17.597: INFO: (9) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 6.013158ms)
Aug 30 16:31:17.597: INFO: (9) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname2/proxy/: tls qux (200; 5.69822ms)
Aug 30 16:31:17.597: INFO: (9) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: test<... (200; 6.305411ms)
Aug 30 16:31:17.624: INFO: (10) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 6.228771ms)
Aug 30 16:31:17.624: INFO: (10) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/: foo (200; 7.251416ms)
Aug 30 16:31:17.624: INFO: (10) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 6.760305ms)
Aug 30 16:31:17.624: INFO: (10) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 7.222431ms)
Aug 30 16:31:17.625: INFO: (10) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:460/proxy/: tls baz (200; 7.192301ms)
Aug 30 16:31:17.625: INFO: (10) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 7.589626ms)
Aug 30 16:31:17.625: INFO: (10) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 7.197356ms)
Aug 30 16:31:17.625: INFO: (10) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname2/proxy/: tls qux (200; 7.80551ms)
Aug 30 16:31:17.625: INFO: (10) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck/proxy/: test (200; 7.915744ms)
Aug 30 16:31:17.625: INFO: (10) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 7.585456ms)
Aug 30 16:31:17.625: INFO: (10) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 7.721691ms)
Aug 30 16:31:17.626: INFO: (10) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ... (200; 8.409286ms)
Aug 30 16:31:17.636: INFO: (11) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 9.797742ms)
Aug 30 16:31:17.636: INFO: (11) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: test (200; 10.405446ms)
Aug 30 16:31:17.637: INFO: (11) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 10.249488ms)
Aug 30 16:31:17.637: INFO: (11) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ... (200; 10.553738ms)
Aug 30 16:31:17.637: INFO: (11) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:1080/proxy/: test<... (200; 10.568921ms)
Aug 30 16:31:17.637: INFO: (11) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 10.49563ms)
Aug 30 16:31:17.637: INFO: (11) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname2/proxy/: tls qux (200; 10.672197ms)
Aug 30 16:31:17.637: INFO: (11) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 11.003988ms)
Aug 30 16:31:17.637: INFO: (11) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:460/proxy/: tls baz (200; 11.157595ms)
Aug 30 16:31:17.637: INFO: (11) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 11.438246ms)
Aug 30 16:31:17.638: INFO: (11) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 11.52103ms)
Aug 30 16:31:17.638: INFO: (11) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/: foo (200; 11.731724ms)
Aug 30 16:31:17.638: INFO: (11) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 11.449472ms)
Aug 30 16:31:17.638: INFO: (11) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 11.7462ms)
Aug 30 16:31:17.659: INFO: (12) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 20.210837ms)
Aug 30 16:31:17.659: INFO: (12) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ... (200; 20.824438ms)
Aug 30 16:31:17.659: INFO: (12) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 21.000453ms)
Aug 30 16:31:17.659: INFO: (12) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 20.76761ms)
Aug 30 16:31:17.659: INFO: (12) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:1080/proxy/: test<... (200; 21.086261ms)
Aug 30 16:31:17.659: INFO: (12) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 21.473651ms)
Aug 30 16:31:17.660: INFO: (12) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: test (200; 21.815585ms)
Aug 30 16:31:17.660: INFO: (12) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 21.935387ms)
Aug 30 16:31:17.660: INFO: (12) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:460/proxy/: tls baz (200; 21.794438ms)
Aug 30 16:31:17.660: INFO: (12) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 22.227447ms)
Aug 30 16:31:17.660: INFO: (12) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/: foo (200; 22.049119ms)
Aug 30 16:31:17.660: INFO: (12) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 22.490899ms)
Aug 30 16:31:17.660: INFO: (12) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 22.380141ms)
Aug 30 16:31:17.661: INFO: (12) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 23.310869ms)
Aug 30 16:31:17.661: INFO: (12) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname2/proxy/: tls qux (200; 23.118966ms)
Aug 30 16:31:17.710: INFO: (13) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 48.472317ms)
Aug 30 16:31:17.711: INFO: (13) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ... (200; 49.085954ms)
Aug 30 16:31:17.713: INFO: (13) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:460/proxy/: tls baz (200; 51.066252ms)
Aug 30 16:31:17.713: INFO: (13) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 51.402189ms)
Aug 30 16:31:17.713: INFO: (13) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 51.406203ms)
Aug 30 16:31:17.713: INFO: (13) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 51.772417ms)
Aug 30 16:31:17.713: INFO: (13) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: test<... (200; 51.577038ms)
Aug 30 16:31:17.713: INFO: (13) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck/proxy/: test (200; 51.841201ms)
Aug 30 16:31:17.714: INFO: (13) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 52.184901ms)
Aug 30 16:31:17.714: INFO: (13) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 52.088883ms)
Aug 30 16:31:17.714: INFO: (13) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 52.147518ms)
Aug 30 16:31:17.714: INFO: (13) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 51.825973ms)
Aug 30 16:31:17.714: INFO: (13) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 52.3192ms)
Aug 30 16:31:17.714: INFO: (13) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/: foo (200; 52.360721ms)
Aug 30 16:31:17.858: INFO: (14) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ... (200; 143.538863ms)
Aug 30 16:31:17.858: INFO: (14) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 142.961498ms)
Aug 30 16:31:17.859: INFO: (14) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 143.577718ms)
Aug 30 16:31:17.859: INFO: (14) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: test (200; 144.87741ms)
Aug 30 16:31:17.859: INFO: (14) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 144.450779ms)
Aug 30 16:31:17.859: INFO: (14) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:1080/proxy/: test<... (200; 145.000324ms)
Aug 30 16:31:17.859: INFO: (14) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 144.894564ms)
Aug 30 16:31:17.863: INFO: (14) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 147.728203ms)
Aug 30 16:31:17.863: INFO: (14) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname2/proxy/: tls qux (200; 148.803361ms)
Aug 30 16:31:17.864: INFO: (14) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 148.669054ms)
Aug 30 16:31:17.864: INFO: (14) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/: foo (200; 149.522758ms)
Aug 30 16:31:17.864: INFO: (14) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 149.499185ms)
Aug 30 16:31:17.913: INFO: (14) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 197.600795ms)
Aug 30 16:31:17.921: INFO: (15) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:460/proxy/: tls baz (200; 7.503514ms)
Aug 30 16:31:17.921: INFO: (15) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 7.710419ms)
Aug 30 16:31:17.924: INFO: (15) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 9.51848ms)
Aug 30 16:31:17.924: INFO: (15) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 9.555626ms)
Aug 30 16:31:17.924: INFO: (15) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 10.844192ms)
Aug 30 16:31:17.925: INFO: (15) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 10.524048ms)
Aug 30 16:31:17.925: INFO: (15) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck/proxy/: test (200; 10.998842ms)
Aug 30 16:31:17.925: INFO: (15) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: test<... (200; 11.833288ms)
Aug 30 16:31:17.926: INFO: (15) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 11.655014ms)
Aug 30 16:31:17.926: INFO: (15) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ... (200; 12.358228ms)
Aug 30 16:31:17.926: INFO: (15) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname2/proxy/: tls qux (200; 12.005275ms)
Aug 30 16:31:17.926: INFO: (15) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 12.964915ms)
Aug 30 16:31:17.927: INFO: (15) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 13.235913ms)
Aug 30 16:31:17.932: INFO: (16) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:1080/proxy/: test<... (200; 4.893726ms)
Aug 30 16:31:17.932: INFO: (16) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ... (200; 5.073631ms)
Aug 30 16:31:17.933: INFO: (16) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 5.656309ms)
Aug 30 16:31:17.934: INFO: (16) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 6.937834ms)
Aug 30 16:31:17.934: INFO: (16) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 6.629034ms)
Aug 30 16:31:17.934: INFO: (16) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 6.720345ms)
Aug 30 16:31:17.934: INFO: (16) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: test (200; 6.847991ms)
Aug 30 16:31:17.935: INFO: (16) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 7.198345ms)
Aug 30 16:31:17.935: INFO: (16) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:460/proxy/: tls baz (200; 7.275582ms)
Aug 30 16:31:17.935: INFO: (16) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 7.702664ms)
Aug 30 16:31:17.935: INFO: (16) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 8.103086ms)
Aug 30 16:31:17.935: INFO: (16) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/: foo (200; 8.195932ms)
Aug 30 16:31:17.935: INFO: (16) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname2/proxy/: tls qux (200; 7.673385ms)
Aug 30 16:31:17.939: INFO: (17) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 3.590259ms)
Aug 30 16:31:17.940: INFO: (17) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:460/proxy/: tls baz (200; 3.982732ms)
Aug 30 16:31:17.940: INFO: (17) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: test<... (200; 5.532116ms)
Aug 30 16:31:17.941: INFO: (17) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 5.512908ms)
Aug 30 16:31:17.941: INFO: (17) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 5.719898ms)
Aug 30 16:31:17.941: INFO: (17) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 5.608292ms)
Aug 30 16:31:17.942: INFO: (17) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck/proxy/: test (200; 5.992937ms)
Aug 30 16:31:17.943: INFO: (17) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ...
(200; 6.951069ms) Aug 30 16:31:17.943: INFO: (17) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 7.514871ms) Aug 30 16:31:17.943: INFO: (17) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 7.967623ms) Aug 30 16:31:17.944: INFO: (17) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/: foo (200; 7.978149ms) Aug 30 16:31:17.944: INFO: (17) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 8.254952ms) Aug 30 16:31:17.944: INFO: (17) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 8.082969ms) Aug 30 16:31:17.944: INFO: (17) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname2/proxy/: tls qux (200; 8.411174ms) Aug 30 16:31:17.948: INFO: (18) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:1080/proxy/: test<... (200; 3.518996ms) Aug 30 16:31:17.948: INFO: (18) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 3.463802ms) Aug 30 16:31:17.950: INFO: (18) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 5.159345ms) Aug 30 16:31:17.950: INFO: (18) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 5.254374ms) Aug 30 16:31:17.950: INFO: (18) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck/proxy/: test (200; 5.429137ms) Aug 30 16:31:17.950: INFO: (18) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 5.567019ms) Aug 30 16:31:17.950: INFO: (18) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ... (200; 5.79268ms) Aug 30 16:31:17.951: INFO: (18) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname1/proxy/: foo (200; 5.811706ms) Aug 30 16:31:17.951: INFO: (18) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname1/proxy/: foo (200; 6.612944ms) Aug 30 16:31:17.951: INFO: (18) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:460/proxy/: tls baz (200; 6.518396ms) Aug 30 16:31:17.951: INFO: (18) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname2/proxy/: tls qux (200; 6.868476ms) Aug 30 16:31:17.951: INFO: (18) /api/v1/namespaces/proxy-9695/services/https:proxy-service-hsp47:tlsportname1/proxy/: tls baz (200; 7.133704ms) Aug 30 16:31:17.952: INFO: (18) /api/v1/namespaces/proxy-9695/services/http:proxy-service-hsp47:portname2/proxy/: bar (200; 7.198578ms) Aug 30 16:31:17.952: INFO: (18) /api/v1/namespaces/proxy-9695/services/proxy-service-hsp47:portname2/proxy/: bar (200; 6.959156ms) Aug 30 16:31:17.952: INFO: (18) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 7.057746ms) Aug 30 16:31:17.952: INFO: (18) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: test (200; 174.762997ms) Aug 30 16:31:18.127: INFO: (19) /api/v1/namespaces/proxy-9695/pods/http:proxy-service-hsp47-7s4ck:1080/proxy/: ... (200; 174.959636ms) Aug 30 16:31:18.127: INFO: (19) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:462/proxy/: tls qux (200; 174.543864ms) Aug 30 16:31:18.127: INFO: (19) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:162/proxy/: bar (200; 174.95141ms) Aug 30 16:31:18.127: INFO: (19) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:1080/proxy/: test<... 
(200; 174.678742ms) Aug 30 16:31:18.127: INFO: (19) /api/v1/namespaces/proxy-9695/pods/proxy-service-hsp47-7s4ck:160/proxy/: foo (200; 174.692133ms) Aug 30 16:31:18.127: INFO: (19) /api/v1/namespaces/proxy-9695/pods/https:proxy-service-hsp47-7s4ck:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 30 16:31:29.717: INFO: Waiting up to 5m0s for pod "pod-85b35752-3032-4735-adb0-c76f1457d987" in namespace "emptydir-6721" to be "success or failure" Aug 30 16:31:29.721: INFO: Pod "pod-85b35752-3032-4735-adb0-c76f1457d987": Phase="Pending", Reason="", readiness=false. Elapsed: 3.831593ms Aug 30 16:31:31.728: INFO: Pod "pod-85b35752-3032-4735-adb0-c76f1457d987": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010542959s Aug 30 16:31:33.734: INFO: Pod "pod-85b35752-3032-4735-adb0-c76f1457d987": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016473408s STEP: Saw pod success Aug 30 16:31:33.734: INFO: Pod "pod-85b35752-3032-4735-adb0-c76f1457d987" satisfied condition "success or failure" Aug 30 16:31:33.738: INFO: Trying to get logs from node iruya-worker2 pod pod-85b35752-3032-4735-adb0-c76f1457d987 container test-container: STEP: delete the pod Aug 30 16:31:33.815: INFO: Waiting for pod pod-85b35752-3032-4735-adb0-c76f1457d987 to disappear Aug 30 16:31:33.847: INFO: Pod pod-85b35752-3032-4735-adb0-c76f1457d987 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:31:33.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6721" for this suite. 
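
For reference, the EmptyDir variant exercised above (namespace emptydir-6721) amounts to creating a pod along the lines of the sketch below and checking the mode of a file written to the volume. All names, the image, and the command are illustrative stand-ins (the suite drives its own test image); only the emptyDir and securityContext wiring reflects the "(non-root,0644,default)" case under test:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo          # illustrative name, not the suite's
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # the "non-root" part of the variant
  containers:
  - name: test-container
    image: busybox:1.36             # stand-in for the framework's test image
    command: ["sh", "-c", "echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                    # the "default" medium, i.e. node-local storage
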
Aug 30 16:31:39.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:31:40.381: INFO: namespace emptydir-6721 deletion completed in 6.523628239s • [SLOW TEST:10.790 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:31:40.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 30 16:31:40.531: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8197a67-0cf7-4ff9-b170-73d2f6b6cd39" in namespace "downward-api-2050" to be "success or failure" Aug 30 16:31:40.543: INFO: Pod "downwardapi-volume-d8197a67-0cf7-4ff9-b170-73d2f6b6cd39": Phase="Pending", Reason="", readiness=false. Elapsed: 11.296086ms Aug 30 16:31:42.548: INFO: Pod "downwardapi-volume-d8197a67-0cf7-4ff9-b170-73d2f6b6cd39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017210411s Aug 30 16:31:44.749: INFO: Pod "downwardapi-volume-d8197a67-0cf7-4ff9-b170-73d2f6b6cd39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.217863963s STEP: Saw pod success Aug 30 16:31:44.749: INFO: Pod "downwardapi-volume-d8197a67-0cf7-4ff9-b170-73d2f6b6cd39" satisfied condition "success or failure" Aug 30 16:31:44.754: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d8197a67-0cf7-4ff9-b170-73d2f6b6cd39 container client-container: STEP: delete the pod Aug 30 16:31:44.911: INFO: Waiting for pod downwardapi-volume-d8197a67-0cf7-4ff9-b170-73d2f6b6cd39 to disappear Aug 30 16:31:44.972: INFO: Pod downwardapi-volume-d8197a67-0cf7-4ff9-b170-73d2f6b6cd39 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:31:44.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2050" for this suite. 
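
The Downward API case above (downward-api-2050) exposes the container's memory limit through a downwardAPI volume and asserts on the file's contents. A minimal sketch follows, with illustrative names and image; the resourceFieldRef with resource limits.memory is the mechanism actually under test, and the mounted file carries the limit as a plain byte count:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36             # any small shell-capable image works
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]   # prints 67108864 for a 64Mi limit
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
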
Aug 30 16:31:51.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:31:51.133: INFO: namespace downward-api-2050 deletion completed in 6.153378447s • [SLOW TEST:10.750 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:31:51.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-8618cf5e-586a-4587-a75e-d805ea10b8ae STEP: Creating a pod to test consume secrets Aug 30 16:31:51.357: INFO: Waiting up to 5m0s for pod "pod-secrets-98d983cb-dfcb-4fbc-b782-b104fd9fd058" in namespace "secrets-9001" to be "success or failure" Aug 30 16:31:51.390: INFO: Pod "pod-secrets-98d983cb-dfcb-4fbc-b782-b104fd9fd058": Phase="Pending", Reason="", readiness=false. Elapsed: 31.855803ms Aug 30 16:31:53.397: INFO: Pod "pod-secrets-98d983cb-dfcb-4fbc-b782-b104fd9fd058": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038818558s Aug 30 16:31:55.407: INFO: Pod "pod-secrets-98d983cb-dfcb-4fbc-b782-b104fd9fd058": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04927266s STEP: Saw pod success Aug 30 16:31:55.407: INFO: Pod "pod-secrets-98d983cb-dfcb-4fbc-b782-b104fd9fd058" satisfied condition "success or failure" Aug 30 16:31:55.411: INFO: Trying to get logs from node iruya-worker pod pod-secrets-98d983cb-dfcb-4fbc-b782-b104fd9fd058 container secret-volume-test: STEP: delete the pod Aug 30 16:31:55.480: INFO: Waiting for pod pod-secrets-98d983cb-dfcb-4fbc-b782-b104fd9fd058 to disappear Aug 30 16:31:55.493: INFO: Pod pod-secrets-98d983cb-dfcb-4fbc-b782-b104fd9fd058 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:31:55.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9001" for this suite. 
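
The Secrets case above (secrets-9001, plus the extra secret-namespace-242 it creates) verifies that a secret volume resolves secretName within the pod's own namespace, even when another namespace holds a secret of the same name. A sketch of the same situation, with hypothetical namespaces ns-a/ns-b and made-up values:

apiVersion: v1
kind: Secret
metadata:
  name: shared-name                 # deliberately identical in both namespaces
  namespace: ns-a
stringData:
  data-1: value-from-ns-a
---
apiVersion: v1
kind: Secret
metadata:
  name: shared-name
  namespace: ns-b
stringData:
  data-1: value-from-ns-b
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo          # illustrative
  namespace: ns-a
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.36
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]   # prints value-from-ns-a
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name       # looked up only in the pod's namespace (ns-a)
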
Aug 30 16:32:01.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:32:02.158: INFO: namespace secrets-9001 deletion completed in 6.65853462s STEP: Destroying namespace "secret-namespace-242" for this suite. Aug 30 16:32:08.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:32:08.399: INFO: namespace secret-namespace-242 deletion completed in 6.240285609s • [SLOW TEST:17.265 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:32:08.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-4822 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4822 to expose endpoints map[] Aug 30 16:32:08.676: INFO: Get endpoints failed (4.447945ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Aug 30 16:32:09.683: INFO: successfully validated that service multi-endpoint-test in namespace services-4822 exposes endpoints map[] (1.011749789s elapsed) STEP: Creating pod pod1 in namespace services-4822 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4822 to expose endpoints map[pod1:[100]] Aug 30 16:32:13.866: INFO: successfully validated that service multi-endpoint-test in namespace services-4822 exposes endpoints map[pod1:[100]] (4.171713024s elapsed) STEP: Creating pod pod2 in namespace services-4822 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4822 to expose endpoints map[pod1:[100] pod2:[101]] Aug 30 16:32:18.231: INFO: successfully validated that service multi-endpoint-test in namespace services-4822 exposes endpoints map[pod1:[100] pod2:[101]] (4.354707551s elapsed) STEP: Deleting pod pod1 in namespace services-4822 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4822 to expose endpoints map[pod2:[101]] Aug 30 16:32:18.267: INFO: successfully validated that service multi-endpoint-test in namespace services-4822 exposes 
endpoints map[pod2:[101]] (14.071753ms elapsed) STEP: Deleting pod pod2 in namespace services-4822 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4822 to expose endpoints map[] Aug 30 16:32:18.285: INFO: successfully validated that service multi-endpoint-test in namespace services-4822 exposes endpoints map[] (9.238415ms elapsed) [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:32:18.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4822" for this suite. Aug 30 16:32:24.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:32:24.960: INFO: namespace services-4822 deletion completed in 6.178902486s [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:16.560 seconds] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:32:24.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-5bzt STEP: Creating a pod to test atomic-volume-subpath Aug 30 16:32:25.164: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5bzt" in namespace "subpath-8744" to be "success or failure" Aug 30 16:32:25.202: INFO: Pod "pod-subpath-test-configmap-5bzt": Phase="Pending", Reason="", readiness=false. Elapsed: 38.204289ms Aug 30 16:32:27.316: INFO: Pod "pod-subpath-test-configmap-5bzt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151854622s Aug 30 16:32:29.324: INFO: Pod "pod-subpath-test-configmap-5bzt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159792941s Aug 30 16:32:31.331: INFO: Pod "pod-subpath-test-configmap-5bzt": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.167137775s Aug 30 16:32:33.339: INFO: Pod "pod-subpath-test-configmap-5bzt": Phase="Running", Reason="", readiness=true. Elapsed: 8.174774514s Aug 30 16:32:35.347: INFO: Pod "pod-subpath-test-configmap-5bzt": Phase="Running", Reason="", readiness=true. Elapsed: 10.182877549s Aug 30 16:32:37.353: INFO: Pod "pod-subpath-test-configmap-5bzt": Phase="Running", Reason="", readiness=true. Elapsed: 12.189161912s Aug 30 16:32:39.359: INFO: Pod "pod-subpath-test-configmap-5bzt": Phase="Running", Reason="", readiness=true. Elapsed: 14.195125769s Aug 30 16:32:41.365: INFO: Pod "pod-subpath-test-configmap-5bzt": Phase="Running", Reason="", readiness=true. Elapsed: 16.200477623s Aug 30 16:32:43.375: INFO: Pod "pod-subpath-test-configmap-5bzt": Phase="Running", Reason="", readiness=true. Elapsed: 18.211303428s Aug 30 16:32:45.382: INFO: Pod "pod-subpath-test-configmap-5bzt": Phase="Running", Reason="", readiness=true. Elapsed: 20.217730481s Aug 30 16:32:47.389: INFO: Pod "pod-subpath-test-configmap-5bzt": Phase="Running", Reason="", readiness=true. Elapsed: 22.225073565s Aug 30 16:32:49.394: INFO: Pod "pod-subpath-test-configmap-5bzt": Phase="Running", Reason="", readiness=true. Elapsed: 24.229904694s Aug 30 16:32:51.400: INFO: Pod "pod-subpath-test-configmap-5bzt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.235526369s STEP: Saw pod success Aug 30 16:32:51.400: INFO: Pod "pod-subpath-test-configmap-5bzt" satisfied condition "success or failure" Aug 30 16:32:51.404: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-5bzt container test-container-subpath-configmap-5bzt: STEP: delete the pod Aug 30 16:32:51.503: INFO: Waiting for pod pod-subpath-test-configmap-5bzt to disappear Aug 30 16:32:51.549: INFO: Pod pod-subpath-test-configmap-5bzt no longer exists STEP: Deleting pod pod-subpath-test-configmap-5bzt Aug 30 16:32:51.550: INFO: Deleting pod "pod-subpath-test-configmap-5bzt" in namespace "subpath-8744" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:32:51.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8744" for this suite. 
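
The Subpath case above (subpath-8744) mounts a single ConfigMap key over a path that already exists as a regular file in the container image, rather than shadowing a whole directory. A rough equivalent with illustrative names (the suite's own pod additionally keeps running so the atomic-writer update behavior can be observed, which this sketch omits):

apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-config         # illustrative
data:
  passwd: "demo:x:1000:1000::/tmp:/bin/sh"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-existing-file-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.36
    command: ["sh", "-c", "cat /etc/passwd"]   # shows the ConfigMap content, not the image's file
    volumeMounts:
    - name: config
      mountPath: /etc/passwd        # a path that already exists as a file in the image
      subPath: passwd               # mount just this one key over that file
  volumes:
  - name: config
    configMap:
      name: subpath-demo-config
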
Aug 30 16:32:57.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:32:57.714: INFO: namespace subpath-8744 deletion completed in 6.153062974s • [SLOW TEST:32.752 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:32:57.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 30 16:32:57.775: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88679694-af93-4b84-8e08-e167b1e25bf7" in namespace "projected-7808" to be "success or failure" Aug 30 16:32:57.802: INFO: Pod "downwardapi-volume-88679694-af93-4b84-8e08-e167b1e25bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.621575ms Aug 30 16:32:59.809: INFO: Pod "downwardapi-volume-88679694-af93-4b84-8e08-e167b1e25bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034113249s Aug 30 16:33:01.817: INFO: Pod "downwardapi-volume-88679694-af93-4b84-8e08-e167b1e25bf7": Phase="Running", Reason="", readiness=true. Elapsed: 4.041787107s Aug 30 16:33:03.825: INFO: Pod "downwardapi-volume-88679694-af93-4b84-8e08-e167b1e25bf7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.049281998s STEP: Saw pod success Aug 30 16:33:03.825: INFO: Pod "downwardapi-volume-88679694-af93-4b84-8e08-e167b1e25bf7" satisfied condition "success or failure" Aug 30 16:33:03.830: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-88679694-af93-4b84-8e08-e167b1e25bf7 container client-container: STEP: delete the pod Aug 30 16:33:03.889: INFO: Waiting for pod downwardapi-volume-88679694-af93-4b84-8e08-e167b1e25bf7 to disappear Aug 30 16:33:03.895: INFO: Pod downwardapi-volume-88679694-af93-4b84-8e08-e167b1e25bf7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:33:03.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7808" for this suite. Aug 30 16:33:09.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:33:10.058: INFO: namespace projected-7808 deletion completed in 6.15582698s • [SLOW TEST:12.343 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:33:10.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-a684b60c-20af-4c34-8dcb-6b2e6feca0f0 STEP: Creating a pod to test consume secrets Aug 30 16:33:10.139: INFO: Waiting up to 5m0s for pod "pod-secrets-f20858c9-412f-40f9-bff5-dca833ccd8a1" in namespace "secrets-6504" to be "success or failure" Aug 30 16:33:10.153: INFO: Pod "pod-secrets-f20858c9-412f-40f9-bff5-dca833ccd8a1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.647145ms Aug 30 16:33:12.160: INFO: Pod "pod-secrets-f20858c9-412f-40f9-bff5-dca833ccd8a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021368908s Aug 30 16:33:14.167: INFO: Pod "pod-secrets-f20858c9-412f-40f9-bff5-dca833ccd8a1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028124446s STEP: Saw pod success Aug 30 16:33:14.167: INFO: Pod "pod-secrets-f20858c9-412f-40f9-bff5-dca833ccd8a1" satisfied condition "success or failure" Aug 30 16:33:14.172: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-f20858c9-412f-40f9-bff5-dca833ccd8a1 container secret-volume-test: STEP: delete the pod Aug 30 16:33:14.203: INFO: Waiting for pod pod-secrets-f20858c9-412f-40f9-bff5-dca833ccd8a1 to disappear Aug 30 16:33:14.207: INFO: Pod pod-secrets-f20858c9-412f-40f9-bff5-dca833ccd8a1 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:33:14.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6504" for this suite. Aug 30 16:33:20.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:33:20.394: INFO: namespace secrets-6504 deletion completed in 6.162610746s • [SLOW TEST:10.335 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:33:20.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 30 16:33:20.609: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7824b5c-1210-45aa-b11b-5c837a186180" in namespace "downward-api-4240" to be "success or failure" Aug 30 16:33:20.631: INFO: Pod "downwardapi-volume-d7824b5c-1210-45aa-b11b-5c837a186180": Phase="Pending", Reason="", readiness=false. Elapsed: 21.213238ms Aug 30 16:33:22.709: INFO: Pod "downwardapi-volume-d7824b5c-1210-45aa-b11b-5c837a186180": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0995163s Aug 30 16:33:24.720: INFO: Pod "downwardapi-volume-d7824b5c-1210-45aa-b11b-5c837a186180": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.110434022s STEP: Saw pod success Aug 30 16:33:24.720: INFO: Pod "downwardapi-volume-d7824b5c-1210-45aa-b11b-5c837a186180" satisfied condition "success or failure" Aug 30 16:33:24.728: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d7824b5c-1210-45aa-b11b-5c837a186180 container client-container: STEP: delete the pod Aug 30 16:33:24.747: INFO: Waiting for pod downwardapi-volume-d7824b5c-1210-45aa-b11b-5c837a186180 to disappear Aug 30 16:33:24.777: INFO: Pod downwardapi-volume-d7824b5c-1210-45aa-b11b-5c837a186180 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:33:24.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4240" for this suite. Aug 30 16:33:31.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:33:31.444: INFO: namespace downward-api-4240 deletion completed in 6.657854393s • [SLOW TEST:11.049 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:33:31.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-4403/configmap-test-92729648-e0d4-4fde-b45c-89befe63f3b0 STEP: Creating a pod to test consume configMaps Aug 30 16:33:31.681: INFO: Waiting up to 5m0s for pod "pod-configmaps-7be9d8e5-6a18-4ae9-bb1d-c13985fc0622" in namespace "configmap-4403" to be "success or failure" Aug 30 16:33:31.705: INFO: Pod "pod-configmaps-7be9d8e5-6a18-4ae9-bb1d-c13985fc0622": Phase="Pending", Reason="", readiness=false. Elapsed: 24.373453ms Aug 30 16:33:33.712: INFO: Pod "pod-configmaps-7be9d8e5-6a18-4ae9-bb1d-c13985fc0622": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030957907s Aug 30 16:33:35.720: INFO: Pod "pod-configmaps-7be9d8e5-6a18-4ae9-bb1d-c13985fc0622": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038573855s STEP: Saw pod success Aug 30 16:33:35.720: INFO: Pod "pod-configmaps-7be9d8e5-6a18-4ae9-bb1d-c13985fc0622" satisfied condition "success or failure" Aug 30 16:33:35.724: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-7be9d8e5-6a18-4ae9-bb1d-c13985fc0622 container env-test: STEP: delete the pod Aug 30 16:33:35.775: INFO: Waiting for pod pod-configmaps-7be9d8e5-6a18-4ae9-bb1d-c13985fc0622 to disappear Aug 30 16:33:35.789: INFO: Pod pod-configmaps-7be9d8e5-6a18-4ae9-bb1d-c13985fc0622 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:33:35.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4403" for this suite. Aug 30 16:33:42.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:33:42.393: INFO: namespace configmap-4403 deletion completed in 6.595841305s • [SLOW TEST:10.940 seconds] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:33:42.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Aug 30 16:33:42.522: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:33:43.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7908" for this suite. 
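
Looking back at the ConfigMap case above (configmap-4403): consuming a ConfigMap through an environment variable comes down to a configMapKeyRef. A minimal sketch, all names and values illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-demo              # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.36
    command: ["sh", "-c", 'echo "$CONFIG_DATA_1"']   # prints value-1
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-demo
          key: data-1
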
Aug 30 16:33:49.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:33:50.144: INFO: namespace kubectl-7908 deletion completed in 6.336435977s • [SLOW TEST:7.749 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:33:50.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-2a0189ea-d105-498b-9656-4462218f2105 STEP: Creating a pod to test consume configMaps Aug 30 16:33:50.249: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3143acaa-8cee-4914-8257-605d875ee64b" in namespace "projected-2940" to be "success or failure" Aug 30 16:33:50.259: INFO: Pod "pod-projected-configmaps-3143acaa-8cee-4914-8257-605d875ee64b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.66683ms Aug 30 16:33:52.354: INFO: Pod "pod-projected-configmaps-3143acaa-8cee-4914-8257-605d875ee64b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104298329s Aug 30 16:33:54.360: INFO: Pod "pod-projected-configmaps-3143acaa-8cee-4914-8257-605d875ee64b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110013888s Aug 30 16:33:56.367: INFO: Pod "pod-projected-configmaps-3143acaa-8cee-4914-8257-605d875ee64b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.117680669s STEP: Saw pod success Aug 30 16:33:56.367: INFO: Pod "pod-projected-configmaps-3143acaa-8cee-4914-8257-605d875ee64b" satisfied condition "success or failure" Aug 30 16:33:56.374: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-3143acaa-8cee-4914-8257-605d875ee64b container projected-configmap-volume-test: STEP: delete the pod Aug 30 16:33:56.423: INFO: Waiting for pod pod-projected-configmaps-3143acaa-8cee-4914-8257-605d875ee64b to disappear Aug 30 16:33:56.429: INFO: Pod pod-projected-configmaps-3143acaa-8cee-4914-8257-605d875ee64b no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:33:56.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2940" for this suite. Aug 30 16:34:02.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:34:02.583: INFO: namespace projected-2940 deletion completed in 6.147079757s • [SLOW TEST:12.438 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:34:02.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Aug 30 16:34:02.670: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9256" to be "success or failure" Aug 30 16:34:02.697: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 26.632741ms Aug 30 16:34:04.703: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031969109s Aug 30 16:34:06.708: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037926232s Aug 30 16:34:08.716: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.045647828s Aug 30 16:34:10.723: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052706748s STEP: Saw pod success Aug 30 16:34:10.723: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Aug 30 16:34:10.728: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Aug 30 16:34:10.767: INFO: Waiting for pod pod-host-path-test to disappear Aug 30 16:34:10.778: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:34:10.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9256" for this suite. Aug 30 16:34:16.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:34:16.990: INFO: namespace hostpath-9256 deletion completed in 6.202460657s • [SLOW TEST:14.402 seconds] [sig-storage] HostPath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:34:16.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:34:22.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6018" for this suite. 
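
Stepping back to the HostPath case above (hostpath-9256), which checks the mode a hostPath mount ends up with inside the container: a bare-bones version might look like the following, with an illustrative name and a stat-based check standing in for the suite's own test-container output:

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo          # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox:1.36
    command: ["sh", "-c", "stat -c '%a %F' /test-volume"]   # print mode and type of the mount point
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp                    # a directory assumed to exist on the node
      type: Directory
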
Aug 30 16:34:28.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:34:28.854: INFO: namespace watch-6018 deletion completed in 6.239527742s • [SLOW TEST:11.864 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:34:28.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Aug 30 16:34:28.949: INFO: Waiting up to 5m0s for pod "var-expansion-cdb04c30-1a69-4619-aa22-76854da7743f" in namespace "var-expansion-1785" to be "success or failure" Aug 30 16:34:28.985: INFO: Pod "var-expansion-cdb04c30-1a69-4619-aa22-76854da7743f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.353218ms Aug 30 16:34:30.992: INFO: Pod "var-expansion-cdb04c30-1a69-4619-aa22-76854da7743f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043171512s Aug 30 16:34:32.999: INFO: Pod "var-expansion-cdb04c30-1a69-4619-aa22-76854da7743f": Phase="Running", Reason="", readiness=true. Elapsed: 4.050338767s Aug 30 16:34:35.007: INFO: Pod "var-expansion-cdb04c30-1a69-4619-aa22-76854da7743f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058160219s STEP: Saw pod success Aug 30 16:34:35.007: INFO: Pod "var-expansion-cdb04c30-1a69-4619-aa22-76854da7743f" satisfied condition "success or failure" Aug 30 16:34:35.012: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-cdb04c30-1a69-4619-aa22-76854da7743f container dapi-container: STEP: delete the pod Aug 30 16:34:35.046: INFO: Waiting for pod var-expansion-cdb04c30-1a69-4619-aa22-76854da7743f to disappear Aug 30 16:34:35.054: INFO: Pod var-expansion-cdb04c30-1a69-4619-aa22-76854da7743f no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:34:35.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1785" for this suite. 
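
The Variable Expansion case above (var-expansion-1785) relies on the $(VAR) syntax, which Kubernetes expands in env values (and in command/args) using variables defined earlier in the same container spec. A minimal sketch, names and values illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo          # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.36
    command: ["sh", "-c", 'echo "$COMPOSED_VAR"']   # prints foo-value;;bar-value
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: COMPOSED_VAR
      value: "$(FOO);;$(BAR)"       # composed from the two vars above at container setup
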
Aug 30 16:34:41.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:34:41.253: INFO: namespace var-expansion-1785 deletion completed in 6.190916827s • [SLOW TEST:12.396 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:34:41.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7448 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-7448 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7448 Aug 30 16:34:41.439: INFO: Found 0 stateful pods, waiting for 1 Aug 30 16:34:51.449: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 30 16:34:51.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 30 16:34:53.031: INFO: stderr: "I0830 16:34:52.874225 473 log.go:172] (0x40005c20b0) (0x40006a68c0) Create stream\nI0830 16:34:52.877106 473 log.go:172] (0x40005c20b0) (0x40006a68c0) Stream added, broadcasting: 1\nI0830 16:34:52.892174 473 log.go:172] (0x40005c20b0) Reply frame received for 1\nI0830 16:34:52.892833 473 log.go:172] (0x40005c20b0) (0x4000798000) Create stream\nI0830 16:34:52.892898 473 log.go:172] (0x40005c20b0) (0x4000798000) Stream added, broadcasting: 3\nI0830 16:34:52.894610 473 log.go:172] (0x40005c20b0) Reply frame received for 3\nI0830 16:34:52.895203 473 log.go:172] (0x40005c20b0) (0x400048a000) Create stream\nI0830 16:34:52.895332 473 log.go:172] (0x40005c20b0) (0x400048a000) Stream added, broadcasting: 5\nI0830 16:34:52.897128 473 
log.go:172] (0x40005c20b0) Reply frame received for 5\nI0830 16:34:52.959205 473 log.go:172] (0x40005c20b0) Data frame received for 5\nI0830 16:34:52.959519 473 log.go:172] (0x400048a000) (5) Data frame handling\nI0830 16:34:52.960305 473 log.go:172] (0x400048a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0830 16:34:53.007206 473 log.go:172] (0x40005c20b0) Data frame received for 5\nI0830 16:34:53.007409 473 log.go:172] (0x400048a000) (5) Data frame handling\nI0830 16:34:53.007677 473 log.go:172] (0x40005c20b0) Data frame received for 3\nI0830 16:34:53.007851 473 log.go:172] (0x4000798000) (3) Data frame handling\nI0830 16:34:53.007990 473 log.go:172] (0x4000798000) (3) Data frame sent\nI0830 16:34:53.008109 473 log.go:172] (0x40005c20b0) Data frame received for 3\nI0830 16:34:53.008230 473 log.go:172] (0x4000798000) (3) Data frame handling\nI0830 16:34:53.008833 473 log.go:172] (0x40005c20b0) Data frame received for 1\nI0830 16:34:53.008957 473 log.go:172] (0x40006a68c0) (1) Data frame handling\nI0830 16:34:53.009064 473 log.go:172] (0x40006a68c0) (1) Data frame sent\nI0830 16:34:53.009677 473 log.go:172] (0x40005c20b0) (0x40006a68c0) Stream removed, broadcasting: 1\nI0830 16:34:53.014141 473 log.go:172] (0x40005c20b0) Go away received\nI0830 16:34:53.016614 473 log.go:172] (0x40005c20b0) (0x40006a68c0) Stream removed, broadcasting: 1\nI0830 16:34:53.017169 473 log.go:172] (0x40005c20b0) (0x4000798000) Stream removed, broadcasting: 3\nI0830 16:34:53.017825 473 log.go:172] (0x40005c20b0) (0x400048a000) Stream removed, broadcasting: 5\n" Aug 30 16:34:53.032: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 30 16:34:53.033: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 30 16:34:53.039: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 30 16:35:03.048: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 30 16:35:03.049: INFO: Waiting for statefulset status.replicas updated to 0 Aug 30 16:35:03.079: INFO: POD NODE PHASE GRACE CONDITIONS Aug 30 16:35:03.081: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC }] Aug 30 16:35:03.083: INFO: Aug 30 16:35:03.083: INFO: StatefulSet ss has not reached scale 3, at 1 Aug 30 16:35:04.091: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984782446s Aug 30 16:35:05.406: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.976488306s Aug 30 16:35:06.435: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.662324294s Aug 30 16:35:07.478: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.63263326s Aug 30 16:35:08.489: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.590012551s Aug 30 16:35:09.512: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.578567637s Aug 30 16:35:10.562: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.555488214s Aug 30 16:35:11.570: INFO: Verifying statefulset ss doesn't scale past 3 
for another 1.505913073s Aug 30 16:35:12.581: INFO: Verifying statefulset ss doesn't scale past 3 for another 497.475612ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7448 Aug 30 16:35:13.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:35:15.047: INFO: stderr: "I0830 16:35:14.939907 497 log.go:172] (0x40006de630) (0x40005286e0) Create stream\nI0830 16:35:14.943621 497 log.go:172] (0x40006de630) (0x40005286e0) Stream added, broadcasting: 1\nI0830 16:35:14.955038 497 log.go:172] (0x40006de630) Reply frame received for 1\nI0830 16:35:14.956135 497 log.go:172] (0x40006de630) (0x40008d0000) Create stream\nI0830 16:35:14.956279 497 log.go:172] (0x40006de630) (0x40008d0000) Stream added, broadcasting: 3\nI0830 16:35:14.958580 497 log.go:172] (0x40006de630) Reply frame received for 3\nI0830 16:35:14.959152 497 log.go:172] (0x40006de630) (0x40008d00a0) Create stream\nI0830 16:35:14.959282 497 log.go:172] (0x40006de630) (0x40008d00a0) Stream added, broadcasting: 5\nI0830 16:35:14.961359 497 log.go:172] (0x40006de630) Reply frame received for 5\nI0830 16:35:15.021295 497 log.go:172] (0x40006de630) Data frame received for 3\nI0830 16:35:15.021540 497 log.go:172] (0x40006de630) Data frame received for 5\nI0830 16:35:15.021612 497 log.go:172] (0x40008d0000) (3) Data frame handling\nI0830 16:35:15.021784 497 log.go:172] (0x40008d00a0) (5) Data frame handling\nI0830 16:35:15.021970 497 log.go:172] (0x40006de630) Data frame received for 1\nI0830 16:35:15.022042 497 log.go:172] (0x40005286e0) (1) Data frame handling\nI0830 16:35:15.022880 497 log.go:172] (0x40008d0000) (3) Data frame sent\nI0830 16:35:15.022947 497 log.go:172] (0x40008d00a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0830 16:35:15.023116 497 log.go:172] (0x40005286e0) (1) Data frame sent\nI0830 16:35:15.023654 497 log.go:172] (0x40006de630) Data frame received for 3\nI0830 16:35:15.023713 497 log.go:172] (0x40008d0000) (3) Data frame handling\nI0830 16:35:15.023811 497 log.go:172] (0x40006de630) Data frame received for 5\nI0830 16:35:15.023916 497 log.go:172] (0x40006de630) (0x40005286e0) Stream removed, broadcasting: 1\nI0830 16:35:15.024254 497 log.go:172] (0x40008d00a0) (5) Data frame handling\nI0830 16:35:15.028178 497 log.go:172] (0x40006de630) Go away received\nI0830 16:35:15.035680 497 log.go:172] (0x40006de630) (0x40005286e0) Stream removed, broadcasting: 1\nI0830 16:35:15.036029 497 log.go:172] (0x40006de630) (0x40008d0000) Stream removed, broadcasting: 3\nI0830 16:35:15.036249 497 log.go:172] (0x40006de630) (0x40008d00a0) Stream removed, broadcasting: 5\n" Aug 30 16:35:15.048: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 30 16:35:15.048: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 30 16:35:15.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:35:16.598: INFO: stderr: "I0830 16:35:16.474065 520 log.go:172] (0x40006bc000) (0x4000632140) Create stream\nI0830 16:35:16.476812 520 log.go:172] (0x40006bc000) (0x4000632140) Stream added, broadcasting: 1\nI0830 16:35:16.487461 520 log.go:172] (0x40006bc000) Reply frame received 
for 1\nI0830 16:35:16.488191 520 log.go:172] (0x40006bc000) (0x40007ee000) Create stream\nI0830 16:35:16.488262 520 log.go:172] (0x40006bc000) (0x40007ee000) Stream added, broadcasting: 3\nI0830 16:35:16.489776 520 log.go:172] (0x40006bc000) Reply frame received for 3\nI0830 16:35:16.489979 520 log.go:172] (0x40006bc000) (0x40006321e0) Create stream\nI0830 16:35:16.490026 520 log.go:172] (0x40006bc000) (0x40006321e0) Stream added, broadcasting: 5\nI0830 16:35:16.490916 520 log.go:172] (0x40006bc000) Reply frame received for 5\nI0830 16:35:16.578933 520 log.go:172] (0x40006bc000) Data frame received for 3\nI0830 16:35:16.579155 520 log.go:172] (0x40006bc000) Data frame received for 5\nI0830 16:35:16.579318 520 log.go:172] (0x40006bc000) Data frame received for 1\nI0830 16:35:16.579400 520 log.go:172] (0x4000632140) (1) Data frame handling\nI0830 16:35:16.579481 520 log.go:172] (0x40006321e0) (5) Data frame handling\nI0830 16:35:16.579685 520 log.go:172] (0x40007ee000) (3) Data frame handling\nI0830 16:35:16.580880 520 log.go:172] (0x4000632140) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0830 16:35:16.581407 520 log.go:172] (0x40006321e0) (5) Data frame sent\nI0830 16:35:16.581484 520 log.go:172] (0x40007ee000) (3) Data frame sent\nI0830 16:35:16.581546 520 log.go:172] (0x40006bc000) Data frame received for 5\nI0830 16:35:16.581618 520 log.go:172] (0x40006321e0) (5) Data frame handling\nI0830 16:35:16.581675 520 log.go:172] (0x40006bc000) Data frame received for 3\nI0830 16:35:16.581726 520 log.go:172] (0x40007ee000) (3) Data frame handling\nI0830 16:35:16.582889 520 log.go:172] (0x40006bc000) (0x4000632140) Stream removed, broadcasting: 1\nI0830 16:35:16.585074 520 log.go:172] (0x40006bc000) Go away received\nI0830 16:35:16.587932 520 log.go:172] (0x40006bc000) (0x4000632140) Stream removed, broadcasting: 1\nI0830 16:35:16.588486 520 log.go:172] (0x40006bc000) (0x40007ee000) Stream removed, broadcasting: 3\nI0830 16:35:16.588649 520 log.go:172] (0x40006bc000) (0x40006321e0) Stream removed, broadcasting: 5\n" Aug 30 16:35:16.600: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 30 16:35:16.600: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 30 16:35:16.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:35:18.078: INFO: stderr: "I0830 16:35:17.965201 542 log.go:172] (0x4000950630) (0x4000906780) Create stream\nI0830 16:35:17.967589 542 log.go:172] (0x4000950630) (0x4000906780) Stream added, broadcasting: 1\nI0830 16:35:17.985556 542 log.go:172] (0x4000950630) Reply frame received for 1\nI0830 16:35:17.986698 542 log.go:172] (0x4000950630) (0x40009a4000) Create stream\nI0830 16:35:17.986816 542 log.go:172] (0x4000950630) (0x40009a4000) Stream added, broadcasting: 3\nI0830 16:35:17.988351 542 log.go:172] (0x4000950630) Reply frame received for 3\nI0830 16:35:17.988608 542 log.go:172] (0x4000950630) (0x4000906000) Create stream\nI0830 16:35:17.988676 542 log.go:172] (0x4000950630) (0x4000906000) Stream added, broadcasting: 5\nI0830 16:35:17.989912 542 log.go:172] (0x4000950630) Reply frame received for 5\nI0830 16:35:18.057390 542 log.go:172] (0x4000950630) Data frame received for 3\nI0830 16:35:18.057584 542 log.go:172] (0x4000950630) Data 
frame received for 1\nI0830 16:35:18.057899 542 log.go:172] (0x4000950630) Data frame received for 5\nI0830 16:35:18.058184 542 log.go:172] (0x40009a4000) (3) Data frame handling\nI0830 16:35:18.058262 542 log.go:172] (0x4000906780) (1) Data frame handling\nI0830 16:35:18.058462 542 log.go:172] (0x4000906000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0830 16:35:18.060351 542 log.go:172] (0x4000906000) (5) Data frame sent\nI0830 16:35:18.060599 542 log.go:172] (0x40009a4000) (3) Data frame sent\nI0830 16:35:18.060690 542 log.go:172] (0x4000950630) Data frame received for 5\nI0830 16:35:18.060920 542 log.go:172] (0x4000906000) (5) Data frame handling\nI0830 16:35:18.061070 542 log.go:172] (0x4000950630) Data frame received for 3\nI0830 16:35:18.061158 542 log.go:172] (0x4000906780) (1) Data frame sent\nI0830 16:35:18.061300 542 log.go:172] (0x40009a4000) (3) Data frame handling\nI0830 16:35:18.062159 542 log.go:172] (0x4000950630) (0x4000906780) Stream removed, broadcasting: 1\nI0830 16:35:18.063804 542 log.go:172] (0x4000950630) Go away received\nI0830 16:35:18.066741 542 log.go:172] (0x4000950630) (0x4000906780) Stream removed, broadcasting: 1\nI0830 16:35:18.067190 542 log.go:172] (0x4000950630) (0x40009a4000) Stream removed, broadcasting: 3\nI0830 16:35:18.067440 542 log.go:172] (0x4000950630) (0x4000906000) Stream removed, broadcasting: 5\n" Aug 30 16:35:18.079: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 30 16:35:18.079: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 30 16:35:18.087: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 30 16:35:18.088: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 30 16:35:18.088: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 30 16:35:18.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 30 16:35:19.604: INFO: stderr: "I0830 16:35:19.486855 566 log.go:172] (0x4000a18420) (0x4000780640) Create stream\nI0830 16:35:19.489731 566 log.go:172] (0x4000a18420) (0x4000780640) Stream added, broadcasting: 1\nI0830 16:35:19.500070 566 log.go:172] (0x4000a18420) Reply frame received for 1\nI0830 16:35:19.500650 566 log.go:172] (0x4000a18420) (0x40005b0280) Create stream\nI0830 16:35:19.500785 566 log.go:172] (0x4000a18420) (0x40005b0280) Stream added, broadcasting: 3\nI0830 16:35:19.502405 566 log.go:172] (0x4000a18420) Reply frame received for 3\nI0830 16:35:19.502725 566 log.go:172] (0x4000a18420) (0x400065e000) Create stream\nI0830 16:35:19.502832 566 log.go:172] (0x4000a18420) (0x400065e000) Stream added, broadcasting: 5\nI0830 16:35:19.504186 566 log.go:172] (0x4000a18420) Reply frame received for 5\nI0830 16:35:19.588405 566 log.go:172] (0x4000a18420) Data frame received for 3\nI0830 16:35:19.588871 566 log.go:172] (0x4000a18420) Data frame received for 5\nI0830 16:35:19.588974 566 log.go:172] (0x400065e000) (5) Data frame handling\nI0830 16:35:19.589089 566 log.go:172] (0x40005b0280) (3) Data frame handling\nI0830 16:35:19.589253 566 log.go:172] (0x4000a18420) Data frame received for 1\nI0830 16:35:19.589333 
566 log.go:172] (0x4000780640) (1) Data frame handling\nI0830 16:35:19.589990 566 log.go:172] (0x400065e000) (5) Data frame sent\nI0830 16:35:19.590346 566 log.go:172] (0x4000780640) (1) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0830 16:35:19.590690 566 log.go:172] (0x4000a18420) Data frame received for 5\nI0830 16:35:19.590766 566 log.go:172] (0x400065e000) (5) Data frame handling\nI0830 16:35:19.591021 566 log.go:172] (0x40005b0280) (3) Data frame sent\nI0830 16:35:19.591133 566 log.go:172] (0x4000a18420) Data frame received for 3\nI0830 16:35:19.591182 566 log.go:172] (0x40005b0280) (3) Data frame handling\nI0830 16:35:19.592918 566 log.go:172] (0x4000a18420) (0x4000780640) Stream removed, broadcasting: 1\nI0830 16:35:19.593190 566 log.go:172] (0x4000a18420) Go away received\nI0830 16:35:19.595627 566 log.go:172] (0x4000a18420) (0x4000780640) Stream removed, broadcasting: 1\nI0830 16:35:19.595820 566 log.go:172] (0x4000a18420) (0x40005b0280) Stream removed, broadcasting: 3\nI0830 16:35:19.595950 566 log.go:172] (0x4000a18420) (0x400065e000) Stream removed, broadcasting: 5\n" Aug 30 16:35:19.605: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 30 16:35:19.605: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 30 16:35:19.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 30 16:35:21.189: INFO: stderr: "I0830 16:35:21.056808 589 log.go:172] (0x4000128dc0) (0x40003ce6e0) Create stream\nI0830 16:35:21.062051 589 log.go:172] (0x4000128dc0) (0x40003ce6e0) Stream added, broadcasting: 1\nI0830 16:35:21.074471 589 log.go:172] (0x4000128dc0) Reply frame received for 1\nI0830 16:35:21.075296 589 log.go:172] (0x4000128dc0) (0x400093c000) Create stream\nI0830 16:35:21.075386 589 log.go:172] (0x4000128dc0) (0x400093c000) Stream added, broadcasting: 3\nI0830 16:35:21.077112 589 log.go:172] (0x4000128dc0) Reply frame received for 3\nI0830 16:35:21.077644 589 log.go:172] (0x4000128dc0) (0x400093c0a0) Create stream\nI0830 16:35:21.077745 589 log.go:172] (0x4000128dc0) (0x400093c0a0) Stream added, broadcasting: 5\nI0830 16:35:21.083324 589 log.go:172] (0x4000128dc0) Reply frame received for 5\nI0830 16:35:21.134798 589 log.go:172] (0x4000128dc0) Data frame received for 5\nI0830 16:35:21.135191 589 log.go:172] (0x400093c0a0) (5) Data frame handling\nI0830 16:35:21.136142 589 log.go:172] (0x400093c0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0830 16:35:21.160514 589 log.go:172] (0x4000128dc0) Data frame received for 3\nI0830 16:35:21.160679 589 log.go:172] (0x400093c000) (3) Data frame handling\nI0830 16:35:21.161115 589 log.go:172] (0x4000128dc0) Data frame received for 5\nI0830 16:35:21.161371 589 log.go:172] (0x400093c0a0) (5) Data frame handling\nI0830 16:35:21.161471 589 log.go:172] (0x400093c000) (3) Data frame sent\nI0830 16:35:21.161604 589 log.go:172] (0x4000128dc0) Data frame received for 3\nI0830 16:35:21.161734 589 log.go:172] (0x400093c000) (3) Data frame handling\nI0830 16:35:21.162815 589 log.go:172] (0x4000128dc0) Data frame received for 1\nI0830 16:35:21.162902 589 log.go:172] (0x40003ce6e0) (1) Data frame handling\nI0830 16:35:21.162994 589 log.go:172] (0x40003ce6e0) (1) Data frame sent\nI0830 16:35:21.165047 589 log.go:172] (0x4000128dc0) (0x40003ce6e0) Stream removed, broadcasting: 
1\nI0830 16:35:21.168299 589 log.go:172] (0x4000128dc0) Go away received\nI0830 16:35:21.172656 589 log.go:172] (0x4000128dc0) (0x40003ce6e0) Stream removed, broadcasting: 1\nI0830 16:35:21.173686 589 log.go:172] (0x4000128dc0) (0x400093c000) Stream removed, broadcasting: 3\nI0830 16:35:21.173982 589 log.go:172] (0x4000128dc0) (0x400093c0a0) Stream removed, broadcasting: 5\n" Aug 30 16:35:21.190: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 30 16:35:21.190: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 30 16:35:21.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 30 16:35:22.798: INFO: stderr: "I0830 16:35:22.644574 612 log.go:172] (0x4000136e70) (0x40008ba640) Create stream\nI0830 16:35:22.650298 612 log.go:172] (0x4000136e70) (0x40008ba640) Stream added, broadcasting: 1\nI0830 16:35:22.661431 612 log.go:172] (0x4000136e70) Reply frame received for 1\nI0830 16:35:22.662036 612 log.go:172] (0x4000136e70) (0x4000982000) Create stream\nI0830 16:35:22.662131 612 log.go:172] (0x4000136e70) (0x4000982000) Stream added, broadcasting: 3\nI0830 16:35:22.663426 612 log.go:172] (0x4000136e70) Reply frame received for 3\nI0830 16:35:22.663676 612 log.go:172] (0x4000136e70) (0x40009820a0) Create stream\nI0830 16:35:22.663757 612 log.go:172] (0x4000136e70) (0x40009820a0) Stream added, broadcasting: 5\nI0830 16:35:22.665107 612 log.go:172] (0x4000136e70) Reply frame received for 5\nI0830 16:35:22.748918 612 log.go:172] (0x4000136e70) Data frame received for 5\nI0830 16:35:22.749151 612 log.go:172] (0x40009820a0) (5) Data frame handling\nI0830 16:35:22.749522 612 log.go:172] (0x40009820a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0830 16:35:22.778713 612 log.go:172] (0x4000136e70) Data frame received for 3\nI0830 16:35:22.778841 612 log.go:172] (0x4000982000) (3) Data frame handling\nI0830 16:35:22.778909 612 log.go:172] (0x4000982000) (3) Data frame sent\nI0830 16:35:22.778981 612 log.go:172] (0x4000136e70) Data frame received for 5\nI0830 16:35:22.779074 612 log.go:172] (0x40009820a0) (5) Data frame handling\nI0830 16:35:22.779162 612 log.go:172] (0x4000136e70) Data frame received for 3\nI0830 16:35:22.779254 612 log.go:172] (0x4000982000) (3) Data frame handling\nI0830 16:35:22.780170 612 log.go:172] (0x4000136e70) Data frame received for 1\nI0830 16:35:22.780244 612 log.go:172] (0x40008ba640) (1) Data frame handling\nI0830 16:35:22.780313 612 log.go:172] (0x40008ba640) (1) Data frame sent\nI0830 16:35:22.780919 612 log.go:172] (0x4000136e70) (0x40008ba640) Stream removed, broadcasting: 1\nI0830 16:35:22.783057 612 log.go:172] (0x4000136e70) Go away received\nI0830 16:35:22.785735 612 log.go:172] (0x4000136e70) (0x40008ba640) Stream removed, broadcasting: 1\nI0830 16:35:22.786228 612 log.go:172] (0x4000136e70) (0x4000982000) Stream removed, broadcasting: 3\nI0830 16:35:22.786377 612 log.go:172] (0x4000136e70) (0x40009820a0) Stream removed, broadcasting: 5\n" Aug 30 16:35:22.799: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 30 16:35:22.799: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 30 16:35:22.799: INFO: Waiting for statefulset status.replicas updated to 0 Aug 30 16:35:22.804: INFO: 
Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 30 16:35:32.815: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 30 16:35:32.815: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 30 16:35:32.815: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 30 16:35:32.834: INFO: POD NODE PHASE GRACE CONDITIONS Aug 30 16:35:32.834: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC }] Aug 30 16:35:32.835: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC }] Aug 30 16:35:32.835: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC }] Aug 30 16:35:32.835: INFO: Aug 30 16:35:32.835: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 30 16:35:33.843: INFO: POD NODE PHASE GRACE CONDITIONS Aug 30 16:35:33.843: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC }] Aug 30 16:35:33.844: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC }] Aug 30 16:35:33.844: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 
UTC 2020-08-30 16:35:03 +0000 UTC }] Aug 30 16:35:33.844: INFO: Aug 30 16:35:33.844: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 30 16:35:34.853: INFO: POD NODE PHASE GRACE CONDITIONS Aug 30 16:35:34.854: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC }] Aug 30 16:35:34.854: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC }] Aug 30 16:35:34.854: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC }] Aug 30 16:35:34.855: INFO: Aug 30 16:35:34.855: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 30 16:35:35.862: INFO: POD NODE PHASE GRACE CONDITIONS Aug 30 16:35:35.862: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC }] Aug 30 16:35:35.862: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC }] Aug 30 16:35:35.863: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC }] Aug 30 16:35:35.863: INFO: Aug 30 16:35:35.863: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 30 16:35:36.872: INFO: POD NODE PHASE GRACE CONDITIONS Aug 30 16:35:36.872: INFO: ss-0 iruya-worker Pending 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC }] Aug 30 16:35:36.873: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC }] Aug 30 16:35:36.873: INFO: Aug 30 16:35:36.873: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 30 16:35:37.881: INFO: POD NODE PHASE GRACE CONDITIONS Aug 30 16:35:37.881: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC }] Aug 30 16:35:37.882: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC }] Aug 30 16:35:37.882: INFO: Aug 30 16:35:37.882: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 30 16:35:38.891: INFO: POD NODE PHASE GRACE CONDITIONS Aug 30 16:35:38.891: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC }] Aug 30 16:35:38.892: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC }] Aug 30 16:35:38.892: INFO: Aug 30 16:35:38.892: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 30 16:35:39.901: INFO: POD NODE PHASE GRACE CONDITIONS Aug 30 16:35:39.901: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC }] Aug 30 16:35:39.902: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC }] Aug 30 16:35:39.902: INFO: Aug 30 16:35:39.902: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 30 16:35:40.910: INFO: POD NODE PHASE GRACE CONDITIONS Aug 30 16:35:40.911: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC }] Aug 30 16:35:40.911: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC }] Aug 30 16:35:40.911: INFO: Aug 30 16:35:40.911: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 30 16:35:41.919: INFO: POD NODE PHASE GRACE CONDITIONS Aug 30 16:35:41.919: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:34:41 +0000 UTC }] Aug 30 16:35:41.919: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 16:35:03 +0000 UTC }] Aug 30 16:35:41.920: INFO: Aug 30 16:35:41.920: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7448 Aug 30 16:35:42.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30
16:35:44.179: INFO: rc: 1 Aug 30 16:35:44.183: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400346d6e0 exit status 1 true [0x40026267f8 0x4002626810 0x4002626828] [0x40026267f8 0x4002626810 0x4002626828] [0x4002626808 0x4002626820] [0xad5158 0xad5158] 0x4002ae93e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:35:54.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:35:55.481: INFO: rc: 1 Aug 30 16:35:55.482: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4002c24570 exit status 1 true [0x400267ce88 0x400267ceb0 0x400267cef0] [0x400267ce88 0x400267ceb0 0x400267cef0] [0x400267cea8 0x400267ced8] [0xad5158 0xad5158] 0x4002adcba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:36:05.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:36:06.779: INFO: rc: 1 Aug 30 16:36:06.779: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4002c24630 exit status 1 true [0x400267cef8 0x400267cf28 0x400267cf60] [0x400267cef8 0x400267cf28 0x400267cf60] [0x400267cf08 0x400267cf48] [0xad5158 0xad5158] 0x4002adcf00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:36:16.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:36:18.045: INFO: rc: 1 Aug 30 16:36:18.045: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4002c24720 exit status 1 true [0x400267cf80 0x400267cfa8 0x400267cfc0] [0x400267cf80 0x400267cfa8 0x400267cfc0] [0x400267cfa0 0x400267cfb8] [0xad5158 0xad5158] 0x4002add260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:36:28.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:36:29.366: INFO: rc: 1 Aug 30 16:36:29.367: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001fb0120 exit status 1 true [0x4000a06228 0x4000a06430 0x4000a06a30] [0x4000a06228 0x4000a06430 0x4000a06a30] [0x4000a06328 0x4000a06778] [0xad5158 0xad5158] 0x40032fc3c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:36:39.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:36:40.639: INFO: rc: 1 Aug 30 16:36:40.639: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x40033960c0 exit status 1 true [0x4000010220 0x40000105c0 0x4000010978] [0x4000010220 0x40000105c0 0x4000010978] [0x4000010388 0x40000106c8] [0xad5158 0xad5158] 0x4001ac4840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:36:50.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:36:51.943: INFO: rc: 1 Aug 30 16:36:51.944: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4003396180 exit status 1 true [0x4000010f08 0x40000112e8 0x40000115d0] [0x4000010f08 0x40000112e8 0x40000115d0] [0x40000111b8 0x4000011580] [0xad5158 0xad5158] 0x4001ac4ba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:37:01.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:37:03.270: INFO: rc: 1 Aug 30 16:37:03.271: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4003396240 exit status 1 true [0x4000011610 0x4000011660 0x40000117a0] [0x4000011610 0x4000011660 0x40000117a0] [0x4000011650 0x4000011790] [0xad5158 0xad5158] 0x4001ac4f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:37:13.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:37:14.533: INFO: rc: 1 Aug 30 16:37:14.533: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400286a0c0 exit status 1 true [0x40000ec508 0x4000671110 0x4000671368] [0x40000ec508 0x4000671110 0x4000671368] [0x40006710c8 0x40006712a8] [0xad5158 0xad5158] 
0x4000d50420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:37:24.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:37:25.811: INFO: rc: 1 Aug 30 16:37:25.811: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4002f800f0 exit status 1 true [0x4001b9c020 0x4001b9c128 0x4001b9c1d0] [0x4001b9c020 0x4001b9c128 0x4001b9c1d0] [0x4001b9c108 0x4001b9c1b0] [0xad5158 0xad5158] 0x40037a02a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:37:35.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:37:37.104: INFO: rc: 1 Aug 30 16:37:37.105: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4002f801b0 exit status 1 true [0x4001b9c200 0x4001b9c2d0 0x4001b9c4a0] [0x4001b9c200 0x4001b9c2d0 0x4001b9c4a0] [0x4001b9c220 0x4001b9c498] [0xad5158 0xad5158] 0x40037a0600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:37:47.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:37:48.370: INFO: rc: 1 Aug 30 16:37:48.371: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4003396330 exit status 1 true [0x40000117c0 0x4000011908 0x4000011940] [0x40000117c0 0x4000011908 0x4000011940] [0x4000011878 0x4000011928] [0xad5158 0xad5158] 0x4001ac5260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:37:58.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:37:59.650: INFO: rc: 1 Aug 30 16:37:59.651: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001fb0240 exit status 1 true [0x4000a06b20 0x4000a07070 0x4000a07678] [0x4000a06b20 0x4000a07070 0x4000a07678] [0x4000a06f18 0x4000a07600] [0xad5158 0xad5158] 0x40032fc720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:38:09.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x 
-c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:38:10.937: INFO: rc: 1 Aug 30 16:38:10.938: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001fb0330 exit status 1 true [0x4000a076d0 0x4000a079a8 0x4000a07be8] [0x4000a076d0 0x4000a079a8 0x4000a07be8] [0x4000a07790 0x4000a07b30] [0xad5158 0xad5158] 0x40032fca80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:38:20.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:38:22.276: INFO: rc: 1 Aug 30 16:38:22.277: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400286a090 exit status 1 true [0x40000ec508 0x40006711b0 0x40006713a8] [0x40000ec508 0x40006711b0 0x40006713a8] [0x4000671110 0x4000671368] [0xad5158 0xad5158] 0x4000d50420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:38:32.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:38:33.552: INFO: rc: 1 Aug 30 16:38:33.553: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x40033960f0 exit status 1 true [0x4000010220 0x40000105c0 0x4000010978] [0x4000010220 0x40000105c0 0x4000010978] [0x4000010388 0x40000106c8] [0xad5158 0xad5158] 0x4001ac4840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:38:43.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:38:44.837: INFO: rc: 1 Aug 30 16:38:44.837: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x40033961e0 exit status 1 true [0x4000010f08 0x40000112e8 0x40000115d0] [0x4000010f08 0x40000112e8 0x40000115d0] [0x40000111b8 0x4000011580] [0xad5158 0xad5158] 0x4001ac4ba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:38:54.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:39:00.644: INFO: rc: 1 Aug 30 16:39:00.645: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400286a1b0 exit status 1 true [0x4000671400 0x4000671478 0x4000671510] [0x4000671400 0x4000671478 0x4000671510] [0x4000671458 0x4000671500] [0xad5158 0xad5158] 0x4000d50a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:39:10.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:39:12.017: INFO: rc: 1 Aug 30 16:39:12.017: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001fb00f0 exit status 1 true [0x4001b9c020 0x4001b9c128 0x4001b9c1d0] [0x4001b9c020 0x4001b9c128 0x4001b9c1d0] [0x4001b9c108 0x4001b9c1b0] [0xad5158 0xad5158] 0x40037a02a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:39:22.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:39:23.340: INFO: rc: 1 Aug 30 16:39:23.340: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4003396300 exit status 1 true [0x4000011610 0x4000011660 0x40000117a0] [0x4000011610 0x4000011660 0x40000117a0] [0x4000011650 0x4000011790] [0xad5158 0xad5158] 0x4001ac4f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:39:33.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:39:34.630: INFO: rc: 1 Aug 30 16:39:34.631: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400286a2a0 exit status 1 true [0x4000671550 0x4000671588 0x4000671ad8] [0x4000671550 0x4000671588 0x4000671ad8] [0x4000671578 0x4000671a90] [0xad5158 0xad5158] 0x4000d50e40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:39:44.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:39:45.906: INFO: rc: 1 Aug 30 16:39:45.906: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001fb0270 exit status 1 true [0x4001b9c200 0x4001b9c2d0 0x4001b9c4a0] [0x4001b9c200 
0x4001b9c2d0 0x4001b9c4a0] [0x4001b9c220 0x4001b9c498] [0xad5158 0xad5158] 0x40037a0600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:39:55.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:39:57.214: INFO: rc: 1 Aug 30 16:39:57.214: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001fb0360 exit status 1 true [0x4001b9c4b0 0x4001b9c5c8 0x4001b9c6d8] [0x4001b9c4b0 0x4001b9c5c8 0x4001b9c6d8] [0x4001b9c520 0x4001b9c6b8] [0xad5158 0xad5158] 0x40037a0960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:40:07.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:40:08.524: INFO: rc: 1 Aug 30 16:40:08.524: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001fb0450 exit status 1 true [0x4001b9c700 0x4001b9c968 0x4001b9cbe0] [0x4001b9c700 0x4001b9c968 0x4001b9cbe0] [0x4001b9c890 0x4001b9cb70] [0xad5158 0xad5158] 0x40037a0cc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:40:18.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:40:19.802: INFO: rc: 1 Aug 30 16:40:19.804: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x40033964b0 exit status 1 true [0x40000117c0 0x4000011908 0x4000011940] [0x40000117c0 0x4000011908 0x4000011940] [0x4000011878 0x4000011928] [0xad5158 0xad5158] 0x4001ac5260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:40:29.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:40:31.113: INFO: rc: 1 Aug 30 16:40:31.113: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x40031640c0 exit status 1 true [0x40000ec508 0x4000a062f0 0x4000a06648] [0x40000ec508 0x4000a062f0 0x4000a06648] [0x4000a06228 0x4000a06430] [0xad5158 0xad5158] 0x4000d50420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:40:41.114: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:40:42.407: INFO: rc: 1 Aug 30 16:40:42.408: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001fb0120 exit status 1 true [0x40006710c8 0x40006712a8 0x4000671400] [0x40006710c8 0x40006712a8 0x4000671400] [0x40006711b0 0x40006713a8] [0xad5158 0xad5158] 0x40037a02a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 30 16:40:52.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 30 16:40:53.712: INFO: rc: 1 Aug 30 16:40:53.713: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Aug 30 16:40:53.713: INFO: Scaling statefulset ss to 0 Aug 30 16:40:53.725: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Aug 30 16:40:53.733: INFO: Deleting all statefulset in ns statefulset-7448 Aug 30 16:40:53.741: INFO: Scaling statefulset ss to 0 Aug 30 16:40:53.752: INFO: Waiting for statefulset status.replicas updated to 0 Aug 30 16:40:53.754: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:40:53.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7448" for this suite. 
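------------------------------
What "burst scaling" means in the long spec above: the StatefulSet under test uses podManagementPolicy: Parallel, so the controller creates and deletes replicas without waiting for lower ordinals to become Ready, and the repeated `mv index.html` exec calls flip each pod's readiness probe to force the "unhealthy" condition. A rough Go sketch of such an object follows, with illustrative labels and probe settings rather than the framework's exact template.

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// burstStatefulSet sketches the shape of the "ss" object under test.
// PodManagementPolicy: Parallel is what makes scaling "burst": the
// controller does not wait for ordinal N to be Ready before acting on N+1.
func burstStatefulSet(replicas int32) *appsv1.StatefulSet {
	labels := map[string]string{"app": "ss"} // illustrative selector
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:            &replicas,
			ServiceName:         "test", // the headless service created in BeforeEach
			PodManagementPolicy: appsv1.ParallelPodManagement,
			Selector:            &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "nginx",
						// The readiness probe serves /index.html; the test's
						// `mv index.html /tmp/` trick flips Ready to false
						// without restarting the container.
						ReadinessProbe: &corev1.Probe{
							Handler: corev1.Handler{ // renamed ProbeHandler in newer API versions
								HTTPGet: &corev1.HTTPGetAction{
									Path: "/index.html",
									Port: intstr.FromInt(80),
								},
							},
						},
					}},
				},
			},
		},
	}
}
------------------------------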
Aug 30 16:41:01.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:41:02.057: INFO: namespace statefulset-7448 deletion completed in 8.270980437s • [SLOW TEST:380.803 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:41:02.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 30 16:41:02.161: INFO: Waiting up to 5m0s for pod "pod-e205945a-2c4f-4e9f-8b2e-4080a5079c6a" in namespace "emptydir-7387" to be "success or failure" Aug 30 16:41:02.181: INFO: Pod "pod-e205945a-2c4f-4e9f-8b2e-4080a5079c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.035643ms Aug 30 16:41:04.187: INFO: Pod "pod-e205945a-2c4f-4e9f-8b2e-4080a5079c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025488886s Aug 30 16:41:06.200: INFO: Pod "pod-e205945a-2c4f-4e9f-8b2e-4080a5079c6a": Phase="Running", Reason="", readiness=true. Elapsed: 4.03869461s Aug 30 16:41:08.209: INFO: Pod "pod-e205945a-2c4f-4e9f-8b2e-4080a5079c6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047268851s STEP: Saw pod success Aug 30 16:41:08.209: INFO: Pod "pod-e205945a-2c4f-4e9f-8b2e-4080a5079c6a" satisfied condition "success or failure" Aug 30 16:41:08.213: INFO: Trying to get logs from node iruya-worker2 pod pod-e205945a-2c4f-4e9f-8b2e-4080a5079c6a container test-container: STEP: delete the pod Aug 30 16:41:08.242: INFO: Waiting for pod pod-e205945a-2c4f-4e9f-8b2e-4080a5079c6a to disappear Aug 30 16:41:08.246: INFO: Pod pod-e205945a-2c4f-4e9f-8b2e-4080a5079c6a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:41:08.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7387" for this suite. 
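For reference, the "(non-root,0777,tmpfs)" case boils down to a pod that mounts a memory-backed emptyDir and verifies file permissions as a non-root user. A minimal sketch of such a pod (pod name, UID and image are illustrative; the real test uses a dedicated mounttest image):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001              # the "non-root" part of the case name
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/ok"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory             # the "tmpfs" part: RAM-backed emptyDir
    EOF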
Aug 30 16:41:14.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:41:14.402: INFO: namespace emptydir-7387 deletion completed in 6.148580079s • [SLOW TEST:12.344 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:41:14.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:41:14.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2105" for this suite. 
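The "secure master service" check verifies little more than that the built-in kubernetes Service in the default namespace exposes HTTPS on port 443. The same assertion can be made by hand:

    # Expect a single https port, 443, on the cluster's built-in API service.
    kubectl get service kubernetes --namespace=default \
      -o jsonpath='{.spec.ports[?(@.name=="https")].port}'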
Aug 30 16:41:20.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:41:20.857: INFO: namespace services-2105 deletion completed in 6.179217187s [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.454 seconds] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:41:20.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-9078ebd5-3370-4af3-8e8a-eaf4371418a4 STEP: Creating secret with name s-test-opt-upd-b01ed5c2-d091-4842-82b8-4aada9c2fdf5 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-9078ebd5-3370-4af3-8e8a-eaf4371418a4 STEP: Updating secret s-test-opt-upd-b01ed5c2-d091-4842-82b8-4aada9c2fdf5 STEP: Creating secret with name s-test-opt-create-d3573791-79ef-4728-bc8f-4cfac2634e27 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:42:57.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5147" for this suite. 
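The optional-secret test hinges on the volume's optional flag: the pod starts even though a referenced secret does not exist yet, and later secret creations, updates and deletions appear in the mounted files once the kubelet resyncs. A sketch of the relevant volume stanza (pod and secret names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: optional-secret-demo
    spec:
      containers:
      - name: watcher
        image: busybox
        command: ["sh", "-c", "while true; do ls /etc/secret-vol 2>/dev/null; sleep 5; done"]
        volumeMounts:
        - name: secret-vol
          mountPath: /etc/secret-vol
      volumes:
      - name: secret-vol
        secret:
          secretName: s-test-opt-create     # may not exist yet
          optional: true                    # pod still starts without it
    EOF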
Aug 30 16:43:21.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:43:21.767: INFO: namespace secrets-5147 deletion completed in 24.129919637s • [SLOW TEST:120.908 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:43:21.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0830 16:43:22.750037 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 30 16:43:22.751: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:43:22.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5016" for this suite. 
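The garbage-collector behaviour under test, deleting a Deployment without orphaning so that its ReplicaSet and Pods are removed shortly afterwards, can be observed directly; the "expected 0 rs, got 1 rs" lines above are just the collector still catching up. An illustrative session (deployment name is made up):

    kubectl create deployment gc-demo --image=nginx
    kubectl scale deployment gc-demo --replicas=2
    kubectl delete deployment gc-demo        # default: cascade, do not orphan
    kubectl get rs,pods -l app=gc-demo       # should drain to empty within seconds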
Aug 30 16:43:28.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:43:29.046: INFO: namespace gc-5016 deletion completed in 6.28829023s • [SLOW TEST:7.276 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:43:29.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:43:33.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4974" for this suite. 
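"Should not write to root filesystem" maps to the container-level readOnlyRootFilesystem security setting: any write outside a mounted volume fails. A minimal sketch (pod name and command are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: readonly-rootfs-demo
    spec:
      restartPolicy: Never
      containers:
      - name: busybox-readonly
        image: busybox
        command: ["sh", "-c", "touch /should-fail || echo 'rootfs is read-only'"]
        securityContext:
          readOnlyRootFilesystem: true
    EOF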
Aug 30 16:44:13.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:44:13.489: INFO: namespace kubelet-test-4974 deletion completed in 40.159338895s • [SLOW TEST:44.440 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:44:13.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 30 16:44:13.604: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb7746c2-9aad-40d3-b4e3-308089c19784" in namespace "downward-api-5140" to be "success or failure" Aug 30 16:44:13.614: INFO: Pod "downwardapi-volume-cb7746c2-9aad-40d3-b4e3-308089c19784": Phase="Pending", Reason="", readiness=false. Elapsed: 9.486152ms Aug 30 16:44:15.820: INFO: Pod "downwardapi-volume-cb7746c2-9aad-40d3-b4e3-308089c19784": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215497671s Aug 30 16:44:17.827: INFO: Pod "downwardapi-volume-cb7746c2-9aad-40d3-b4e3-308089c19784": Phase="Running", Reason="", readiness=true. Elapsed: 4.223022919s Aug 30 16:44:19.834: INFO: Pod "downwardapi-volume-cb7746c2-9aad-40d3-b4e3-308089c19784": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.230198692s STEP: Saw pod success Aug 30 16:44:19.835: INFO: Pod "downwardapi-volume-cb7746c2-9aad-40d3-b4e3-308089c19784" satisfied condition "success or failure" Aug 30 16:44:19.840: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cb7746c2-9aad-40d3-b4e3-308089c19784 container client-container: STEP: delete the pod Aug 30 16:44:19.872: INFO: Waiting for pod downwardapi-volume-cb7746c2-9aad-40d3-b4e3-308089c19784 to disappear Aug 30 16:44:19.895: INFO: Pod downwardapi-volume-cb7746c2-9aad-40d3-b4e3-308089c19784 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:44:19.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5140" for this suite. Aug 30 16:44:25.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:44:26.079: INFO: namespace downward-api-5140 deletion completed in 6.174854208s • [SLOW TEST:12.585 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:44:26.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Aug 30 16:44:26.222: INFO: Waiting up to 5m0s for pod "downward-api-082e0170-7375-4c02-b41b-524b7f564e51" in namespace "downward-api-6522" to be "success or failure" Aug 30 16:44:26.237: INFO: Pod "downward-api-082e0170-7375-4c02-b41b-524b7f564e51": Phase="Pending", Reason="", readiness=false. Elapsed: 14.614126ms Aug 30 16:44:28.245: INFO: Pod "downward-api-082e0170-7375-4c02-b41b-524b7f564e51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022033718s Aug 30 16:44:30.253: INFO: Pod "downward-api-082e0170-7375-4c02-b41b-524b7f564e51": Phase="Running", Reason="", readiness=true. Elapsed: 4.03006806s Aug 30 16:44:32.260: INFO: Pod "downward-api-082e0170-7375-4c02-b41b-524b7f564e51": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.037390578s STEP: Saw pod success Aug 30 16:44:32.261: INFO: Pod "downward-api-082e0170-7375-4c02-b41b-524b7f564e51" satisfied condition "success or failure" Aug 30 16:44:32.266: INFO: Trying to get logs from node iruya-worker pod downward-api-082e0170-7375-4c02-b41b-524b7f564e51 container dapi-container: STEP: delete the pod Aug 30 16:44:32.334: INFO: Waiting for pod downward-api-082e0170-7375-4c02-b41b-524b7f564e51 to disappear Aug 30 16:44:32.352: INFO: Pod downward-api-082e0170-7375-4c02-b41b-524b7f564e51 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:44:32.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6522" for this suite. Aug 30 16:44:38.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:44:38.516: INFO: namespace downward-api-6522 deletion completed in 6.157602188s • [SLOW TEST:12.436 seconds] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:44:38.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Aug 30 16:44:44.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-20d77cba-dbcd-4213-9260-55ac88dfb94c -c busybox-main-container --namespace=emptydir-5268 -- cat /usr/share/volumeshare/shareddata.txt' Aug 30 16:44:46.199: INFO: stderr: "I0830 16:44:46.068300 1267 log.go:172] (0x4000792840) (0x40008c4a00) Create stream\nI0830 16:44:46.075547 1267 log.go:172] (0x4000792840) (0x40008c4a00) Stream added, broadcasting: 1\nI0830 16:44:46.090841 1267 log.go:172] (0x4000792840) Reply frame received for 1\nI0830 16:44:46.091523 1267 log.go:172] (0x4000792840) (0x40008c4000) Create stream\nI0830 16:44:46.091596 1267 log.go:172] (0x4000792840) (0x40008c4000) Stream added, broadcasting: 3\nI0830 16:44:46.093059 1267 log.go:172] (0x4000792840) Reply frame received for 3\nI0830 16:44:46.093393 1267 log.go:172] (0x4000792840) (0x40008c40a0) Create stream\nI0830 16:44:46.093460 1267 log.go:172] 
(0x4000792840) (0x40008c40a0) Stream added, broadcasting: 5\nI0830 16:44:46.094652 1267 log.go:172] (0x4000792840) Reply frame received for 5\nI0830 16:44:46.175904 1267 log.go:172] (0x4000792840) Data frame received for 5\nI0830 16:44:46.176305 1267 log.go:172] (0x4000792840) Data frame received for 3\nI0830 16:44:46.176540 1267 log.go:172] (0x40008c4000) (3) Data frame handling\nI0830 16:44:46.176862 1267 log.go:172] (0x40008c40a0) (5) Data frame handling\nI0830 16:44:46.177137 1267 log.go:172] (0x4000792840) Data frame received for 1\nI0830 16:44:46.177315 1267 log.go:172] (0x40008c4a00) (1) Data frame handling\nI0830 16:44:46.178712 1267 log.go:172] (0x40008c4000) (3) Data frame sent\nI0830 16:44:46.179275 1267 log.go:172] (0x4000792840) Data frame received for 3\nI0830 16:44:46.179386 1267 log.go:172] (0x40008c4000) (3) Data frame handling\nI0830 16:44:46.179520 1267 log.go:172] (0x40008c4a00) (1) Data frame sent\nI0830 16:44:46.180212 1267 log.go:172] (0x4000792840) (0x40008c4a00) Stream removed, broadcasting: 1\nI0830 16:44:46.182029 1267 log.go:172] (0x4000792840) Go away received\nI0830 16:44:46.184956 1267 log.go:172] (0x4000792840) (0x40008c4a00) Stream removed, broadcasting: 1\nI0830 16:44:46.185204 1267 log.go:172] (0x4000792840) (0x40008c4000) Stream removed, broadcasting: 3\nI0830 16:44:46.185410 1267 log.go:172] (0x4000792840) (0x40008c40a0) Stream removed, broadcasting: 5\n" Aug 30 16:44:46.200: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:44:46.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5268" for this suite. 
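The shared-volume test's "Hello from the busy-box sub-container" stdout shows one container reading a file that another container wrote through a common emptyDir mount. A stripped-down equivalent (all names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-volume-demo
    spec:
      containers:
      - name: writer
        image: busybox
        command: ["sh", "-c", "echo hello > /share/data.txt && sleep 3600"]
        volumeMounts:
        - { name: share, mountPath: /share }
      - name: reader
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - { name: share, mountPath: /share }
      volumes:
      - name: share
        emptyDir: {}
    EOF
    kubectl exec shared-volume-demo -c reader -- cat /share/data.txt   # prints "hello"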
Aug 30 16:44:52.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:44:52.404: INFO: namespace emptydir-5268 deletion completed in 6.195279602s • [SLOW TEST:13.887 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:44:52.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 30 16:44:52.537: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 30 16:44:52.554: INFO: Number of nodes with available pods: 0 Aug 30 16:44:52.565: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Aug 30 16:44:52.646: INFO: Number of nodes with available pods: 0
Aug 30 16:44:52.646: INFO: Node iruya-worker is running more than one daemon pod
[The same pair of poll messages repeated roughly once per second from 16:44:53 through 16:44:56, still with no available pods.]
Aug 30 16:44:57.656: INFO: Number of nodes with available pods: 1
Aug 30 16:44:57.656: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 30 16:44:57.712: INFO: Number of nodes with available pods: 1
Aug 30 16:44:57.712: INFO: Number of running nodes: 0, number of available pods: 1
Aug 30 16:44:58.720: INFO: Number of nodes with available pods: 0
Aug 30 16:44:58.720: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 30 16:44:58.757: INFO: Number of nodes with available pods: 0
Aug 30 16:44:58.758: INFO: Node iruya-worker is running more than one daemon pod
[The same pair of poll messages repeated roughly once per second from 16:44:59 through 16:45:16, still with no available pods.]
Aug 30 16:45:17.765: INFO: Number of nodes with available pods: 1
Aug 30 16:45:17.765: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-858, will wait for the garbage collector to delete the pods
Aug 30 16:45:17.844: INFO: Deleting DaemonSet.extensions daemon-set took: 9.299607ms
Aug 30 16:45:18.145: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.042812ms
Aug 30 16:45:33.436: INFO: Number of nodes with available pods: 0
Aug 30 16:45:33.437: INFO: Number of running nodes: 0, number of available pods: 0
Aug 30 16:45:33.475: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-858/daemonsets","resourceVersion":"4058554"},"items":null}
Aug 30 16:45:33.482: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-858/pods","resourceVersion":"4058554"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 16:45:33.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-858" for this suite.
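The "complex daemon" test drives scheduling purely through labels: the DaemonSet carries a nodeSelector, so relabelling a node from blue to green is what schedules and unschedules its pod. A sketch of the moving parts (node and DaemonSet names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set-demo
    spec:
      selector:
        matchLabels: { app: daemon-set-demo }
      updateStrategy:
        type: RollingUpdate
      template:
        metadata:
          labels: { app: daemon-set-demo }
        spec:
          nodeSelector:
            color: blue          # pods land only on nodes labelled color=blue
          containers:
          - name: app
            image: nginx
    EOF
    kubectl label node worker-1 color=blue              # daemon pod appears on worker-1
    kubectl label node worker-1 color=green --overwrite # ...and is unscheduled again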
Aug 30 16:45:39.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:45:39.715: INFO: namespace daemonsets-858 deletion completed in 6.180971789s • [SLOW TEST:47.311 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:45:39.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-e9c01dae-6aa9-4665-9182-9fd8d1b30565 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:45:45.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4567" for this suite. 
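ConfigMaps store non-UTF-8 payloads under binaryData rather than data, and the test checks that both kinds survive a volume mount. By hand (file and ConfigMap names are illustrative), kubectl routes non-text file content into binaryData automatically:

    printf '\377\001\002' > payload.bin           # deliberately non-UTF-8 bytes
    kubectl create configmap binary-demo --from-file=payload.bin
    kubectl get configmap binary-demo -o jsonpath='{.binaryData.payload\.bin}'   # base64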
Aug 30 16:46:09.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:46:10.118: INFO: namespace configmap-4567 deletion completed in 24.174710182s • [SLOW TEST:30.397 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:46:10.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 30 16:46:10.184: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:46:14.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1135" for this suite. 
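Retrieving logs "over websockets" exercises the same log subresource that kubectl reads; outside the e2e framework, the closest hand-run equivalents are kubectl logs or a raw API request (pod name is illustrative):

    kubectl logs pod-logs-demo --follow
    # or, hitting the API subresource directly:
    kubectl get --raw "/api/v1/namespaces/default/pods/pod-logs-demo/log?tailLines=10"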
Aug 30 16:46:52.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:46:52.473: INFO: namespace pods-1135 deletion completed in 38.17340178s • [SLOW TEST:42.351 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:46:52.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 30 16:46:52.613: INFO: Waiting up to 5m0s for pod "pod-35613e11-9e30-4352-9ca1-5e5f48157c31" in namespace "emptydir-1473" to be "success or failure" Aug 30 16:46:52.622: INFO: Pod "pod-35613e11-9e30-4352-9ca1-5e5f48157c31": Phase="Pending", Reason="", readiness=false. Elapsed: 8.213156ms Aug 30 16:46:54.629: INFO: Pod "pod-35613e11-9e30-4352-9ca1-5e5f48157c31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01557665s Aug 30 16:46:56.636: INFO: Pod "pod-35613e11-9e30-4352-9ca1-5e5f48157c31": Phase="Running", Reason="", readiness=true. Elapsed: 4.022953215s Aug 30 16:46:58.643: INFO: Pod "pod-35613e11-9e30-4352-9ca1-5e5f48157c31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029469336s STEP: Saw pod success Aug 30 16:46:58.643: INFO: Pod "pod-35613e11-9e30-4352-9ca1-5e5f48157c31" satisfied condition "success or failure" Aug 30 16:46:58.647: INFO: Trying to get logs from node iruya-worker pod pod-35613e11-9e30-4352-9ca1-5e5f48157c31 container test-container: STEP: delete the pod Aug 30 16:46:58.696: INFO: Waiting for pod pod-35613e11-9e30-4352-9ca1-5e5f48157c31 to disappear Aug 30 16:46:58.701: INFO: Pod pod-35613e11-9e30-4352-9ca1-5e5f48157c31 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:46:58.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1473" for this suite. 
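Each EmptyDir conformance case name is a (user, file mode, medium) triple, so "(root,0644,default)" means: run as root, expect a 0644 file, on a disk-backed (default-medium) emptyDir. A sketch of the equivalent check (names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-default-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        # umask 0022 makes newly created files 0644, matching the case name
        command: ["sh", "-c", "umask 0022 && touch /test/f && ls -l /test/f"]
        volumeMounts:
        - { name: v, mountPath: /test }
      volumes:
      - name: v
        emptyDir: {}          # no medium set: node-disk backed, the "default"
    EOF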
Aug 30 16:47:04.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:47:04.874: INFO: namespace emptydir-1473 deletion completed in 6.16608081s • [SLOW TEST:12.398 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:47:04.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Aug 30 16:47:05.008: INFO: Waiting up to 5m0s for pod "client-containers-f85374f4-9d14-414e-b42b-4789a35e8c98" in namespace "containers-9013" to be "success or failure" Aug 30 16:47:05.043: INFO: Pod "client-containers-f85374f4-9d14-414e-b42b-4789a35e8c98": Phase="Pending", Reason="", readiness=false. Elapsed: 34.751874ms Aug 30 16:47:07.139: INFO: Pod "client-containers-f85374f4-9d14-414e-b42b-4789a35e8c98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131342921s Aug 30 16:47:09.199: INFO: Pod "client-containers-f85374f4-9d14-414e-b42b-4789a35e8c98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.191087606s STEP: Saw pod success Aug 30 16:47:09.199: INFO: Pod "client-containers-f85374f4-9d14-414e-b42b-4789a35e8c98" satisfied condition "success or failure" Aug 30 16:47:09.203: INFO: Trying to get logs from node iruya-worker2 pod client-containers-f85374f4-9d14-414e-b42b-4789a35e8c98 container test-container: STEP: delete the pod Aug 30 16:47:09.242: INFO: Waiting for pod client-containers-f85374f4-9d14-414e-b42b-4789a35e8c98 to disappear Aug 30 16:47:09.245: INFO: Pod client-containers-f85374f4-9d14-414e-b42b-4789a35e8c98 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:47:09.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9013" for this suite. 
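"Override the image's default command and arguments" is the Kubernetes command/args pair: command replaces the image's ENTRYPOINT and args replaces its CMD. Both overridden together (pod name is illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: override-all-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["echo"]                     # replaces the image ENTRYPOINT
        args: ["overridden", "arguments"]     # replaces the image CMD
    EOF
    kubectl logs override-all-demo    # prints: overridden arguments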
Aug 30 16:47:15.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:47:15.381: INFO: namespace containers-9013 deletion completed in 6.129610121s • [SLOW TEST:10.506 seconds] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:47:15.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-9ad9427c-b6d0-44c7-b199-ffb5b290fb2d STEP: Creating a pod to test consume secrets Aug 30 16:47:15.523: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-db89364c-8f9a-411c-847e-7115dd4fff4b" in namespace "projected-486" to be "success or failure" Aug 30 16:47:15.533: INFO: Pod "pod-projected-secrets-db89364c-8f9a-411c-847e-7115dd4fff4b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.732399ms Aug 30 16:47:17.539: INFO: Pod "pod-projected-secrets-db89364c-8f9a-411c-847e-7115dd4fff4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01514175s Aug 30 16:47:19.545: INFO: Pod "pod-projected-secrets-db89364c-8f9a-411c-847e-7115dd4fff4b": Phase="Running", Reason="", readiness=true. Elapsed: 4.02136094s Aug 30 16:47:21.564: INFO: Pod "pod-projected-secrets-db89364c-8f9a-411c-847e-7115dd4fff4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040495883s STEP: Saw pod success Aug 30 16:47:21.565: INFO: Pod "pod-projected-secrets-db89364c-8f9a-411c-847e-7115dd4fff4b" satisfied condition "success or failure" Aug 30 16:47:21.570: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-db89364c-8f9a-411c-847e-7115dd4fff4b container secret-volume-test: STEP: delete the pod Aug 30 16:47:21.610: INFO: Waiting for pod pod-projected-secrets-db89364c-8f9a-411c-847e-7115dd4fff4b to disappear Aug 30 16:47:21.614: INFO: Pod pod-projected-secrets-db89364c-8f9a-411c-847e-7115dd4fff4b no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:47:21.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-486" for this suite. 
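"Consumable in multiple volumes" mounts the same secret at two paths via two projected volumes and expects both mounts to show identical content. The relevant stanzas (secret and pod names are illustrative):

    kubectl create secret generic demo-secret --from-literal=key=value
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/vol1/key /etc/vol2/key"]
        volumeMounts:
        - { name: vol1, mountPath: /etc/vol1 }
        - { name: vol2, mountPath: /etc/vol2 }
      volumes:
      - name: vol1
        projected:
          sources:
          - secret: { name: demo-secret }
      - name: vol2
        projected:
          sources:
          - secret: { name: demo-secret }    # same secret, second mount
    EOF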
Aug 30 16:47:27.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:47:27.797: INFO: namespace projected-486 deletion completed in 6.17549253s • [SLOW TEST:12.415 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:47:27.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 30 16:47:27.885: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed7b6e13-9576-45f2-8c4c-c7b76095140d" in namespace "projected-2277" to be "success or failure" Aug 30 16:47:27.892: INFO: Pod "downwardapi-volume-ed7b6e13-9576-45f2-8c4c-c7b76095140d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.879364ms Aug 30 16:47:30.134: INFO: Pod "downwardapi-volume-ed7b6e13-9576-45f2-8c4c-c7b76095140d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248388805s Aug 30 16:47:32.225: INFO: Pod "downwardapi-volume-ed7b6e13-9576-45f2-8c4c-c7b76095140d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.339830482s STEP: Saw pod success Aug 30 16:47:32.226: INFO: Pod "downwardapi-volume-ed7b6e13-9576-45f2-8c4c-c7b76095140d" satisfied condition "success or failure" Aug 30 16:47:32.249: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ed7b6e13-9576-45f2-8c4c-c7b76095140d container client-container: STEP: delete the pod Aug 30 16:47:32.271: INFO: Waiting for pod downwardapi-volume-ed7b6e13-9576-45f2-8c4c-c7b76095140d to disappear Aug 30 16:47:32.288: INFO: Pod downwardapi-volume-ed7b6e13-9576-45f2-8c4c-c7b76095140d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:47:32.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2277" for this suite. 
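"Set mode on item file" refers to the per-item mode field of a downwardAPI projection, which controls the permission bits of the generated file. A sketch (pod name and mode are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "ls -lL /etc/podinfo/podname"]
        volumeMounts:
        - { name: podinfo, mountPath: /etc/podinfo }
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef: { fieldPath: metadata.name }
                mode: 0400            # the per-item file mode under test
    EOF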
Aug 30 16:47:38.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:47:38.462: INFO: namespace projected-2277 deletion completed in 6.165242243s • [SLOW TEST:10.662 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:47:38.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 30 16:47:38.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Aug 30 16:47:39.805: INFO: stderr: "" Aug 30 16:47:39.805: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T05:17:59Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:08:45Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:47:39.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2909" for this suite. 
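The kubectl version check only asserts that both the client and server stanzas are fully populated, as they are in the stdout above; the structured output form makes the same check easier to eyeball or script:

    kubectl version -o json    # both .clientVersion and .serverVersion should be complete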
Aug 30 16:47:45.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:47:45.996: INFO: namespace kubectl-2909 deletion completed in 6.180986609s • [SLOW TEST:7.531 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:47:45.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2865.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2865.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2865.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2865.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2865.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2865.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2865.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2865.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2865.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2865.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2865.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 197.19.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.19.197_udp@PTR;check="$$(dig +tcp +noall +answer +search 197.19.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.19.197_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2865.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2865.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2865.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2865.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2865.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2865.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2865.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2865.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2865.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2865.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2865.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 197.19.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.19.197_udp@PTR;check="$$(dig +tcp +noall +answer +search 197.19.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.19.197_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 30 16:47:54.252: INFO: Unable to read wheezy_udp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:47:54.257: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:47:54.262: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:47:54.266: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:47:54.304: INFO: Unable to read jessie_udp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:47:54.309: INFO: Unable to read jessie_tcp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:47:54.331: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:47:54.337: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:47:54.381: INFO: Lookups using dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616 failed for: [wheezy_udp@dns-test-service.dns-2865.svc.cluster.local wheezy_tcp@dns-test-service.dns-2865.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local jessie_udp@dns-test-service.dns-2865.svc.cluster.local jessie_tcp@dns-test-service.dns-2865.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local] Aug 30 16:47:59.389: INFO: Unable to read wheezy_udp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:47:59.394: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods 
dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:47:59.398: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:47:59.402: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:47:59.458: INFO: Unable to read jessie_udp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:47:59.463: INFO: Unable to read jessie_tcp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:47:59.467: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:47:59.471: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:47:59.504: INFO: Lookups using dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616 failed for: [wheezy_udp@dns-test-service.dns-2865.svc.cluster.local wheezy_tcp@dns-test-service.dns-2865.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local jessie_udp@dns-test-service.dns-2865.svc.cluster.local jessie_tcp@dns-test-service.dns-2865.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local] Aug 30 16:48:04.388: INFO: Unable to read wheezy_udp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:04.393: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:04.397: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:04.401: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:04.427: INFO: Unable to read jessie_udp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the 
server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:04.431: INFO: Unable to read jessie_tcp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:04.435: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:04.440: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:04.465: INFO: Lookups using dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616 failed for: [wheezy_udp@dns-test-service.dns-2865.svc.cluster.local wheezy_tcp@dns-test-service.dns-2865.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local jessie_udp@dns-test-service.dns-2865.svc.cluster.local jessie_tcp@dns-test-service.dns-2865.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local] Aug 30 16:48:09.389: INFO: Unable to read wheezy_udp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:09.394: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:09.399: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:09.403: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:09.432: INFO: Unable to read jessie_udp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:09.436: INFO: Unable to read jessie_tcp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:09.440: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:09.445: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod 
dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:09.470: INFO: Lookups using dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616 failed for: [wheezy_udp@dns-test-service.dns-2865.svc.cluster.local wheezy_tcp@dns-test-service.dns-2865.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local jessie_udp@dns-test-service.dns-2865.svc.cluster.local jessie_tcp@dns-test-service.dns-2865.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local] Aug 30 16:48:14.387: INFO: Unable to read wheezy_udp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:14.402: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:14.406: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:14.411: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:14.441: INFO: Unable to read jessie_udp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:14.445: INFO: Unable to read jessie_tcp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:14.449: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:14.454: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:14.505: INFO: Lookups using dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616 failed for: [wheezy_udp@dns-test-service.dns-2865.svc.cluster.local wheezy_tcp@dns-test-service.dns-2865.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local jessie_udp@dns-test-service.dns-2865.svc.cluster.local jessie_tcp@dns-test-service.dns-2865.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local] Aug 30 
16:48:19.542: INFO: Unable to read wheezy_udp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:19.546: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:19.551: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:19.555: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:19.581: INFO: Unable to read jessie_udp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:19.585: INFO: Unable to read jessie_tcp@dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:19.589: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:19.593: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local from pod dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616: the server could not find the requested resource (get pods dns-test-0855bd5d-3bea-41a8-b92c-30a741480616) Aug 30 16:48:19.672: INFO: Lookups using dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616 failed for: [wheezy_udp@dns-test-service.dns-2865.svc.cluster.local wheezy_tcp@dns-test-service.dns-2865.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local jessie_udp@dns-test-service.dns-2865.svc.cluster.local jessie_tcp@dns-test-service.dns-2865.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2865.svc.cluster.local] Aug 30 16:48:24.476: INFO: DNS probes using dns-2865/dns-test-0855bd5d-3bea-41a8-b92c-30a741480616 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:48:25.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2865" for this suite. 
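
The wheezy/jessie probe script in the DNS spec above is ordinary dig usage in a retry loop; the early "Unable to read" entries are just polls that ran before the records propagated, and the run converges to "DNS probes ... succeeded". The same lookups can be replayed by hand from any pod whose image ships dig (bind-tools). Service, namespace, and the 10.102.19.197 ClusterIP below are the ones from this run; substitute your own:

# UDP and TCP A lookups for the headless service; a non-empty answer means
# in-cluster DNS resolved it.
dig +notcp +noall +answer +search dns-test-service.dns-2865.svc.cluster.local A
dig +tcp   +noall +answer +search dns-test-service.dns-2865.svc.cluster.local A
# SRV record published for the service's named http port.
dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2865.svc.cluster.local SRV
# Reverse (PTR) lookup of the service ClusterIP, octets reversed as usual.
dig +notcp +noall +answer 197.19.102.10.in-addr.arpa. PTR
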
Aug 30 16:48:31.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:48:31.810: INFO: namespace dns-2865 deletion completed in 6.150175455s • [SLOW TEST:45.811 seconds] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:48:31.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-0baeb843-bc01-454e-9033-a0d33b727c78 STEP: Creating a pod to test consume configMaps Aug 30 16:48:31.956: INFO: Waiting up to 5m0s for pod "pod-configmaps-cdb8ae5c-48ee-4922-ab44-0d7b069155f1" in namespace "configmap-6985" to be "success or failure" Aug 30 16:48:31.976: INFO: Pod "pod-configmaps-cdb8ae5c-48ee-4922-ab44-0d7b069155f1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.463486ms Aug 30 16:48:33.984: INFO: Pod "pod-configmaps-cdb8ae5c-48ee-4922-ab44-0d7b069155f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027911769s Aug 30 16:48:35.990: INFO: Pod "pod-configmaps-cdb8ae5c-48ee-4922-ab44-0d7b069155f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033773534s Aug 30 16:48:37.996: INFO: Pod "pod-configmaps-cdb8ae5c-48ee-4922-ab44-0d7b069155f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040023205s STEP: Saw pod success Aug 30 16:48:37.996: INFO: Pod "pod-configmaps-cdb8ae5c-48ee-4922-ab44-0d7b069155f1" satisfied condition "success or failure" Aug 30 16:48:38.000: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-cdb8ae5c-48ee-4922-ab44-0d7b069155f1 container configmap-volume-test: STEP: delete the pod Aug 30 16:48:38.032: INFO: Waiting for pod pod-configmaps-cdb8ae5c-48ee-4922-ab44-0d7b069155f1 to disappear Aug 30 16:48:38.063: INFO: Pod pod-configmaps-cdb8ae5c-48ee-4922-ab44-0d7b069155f1 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:48:38.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6985" for this suite. 
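
What the ConfigMap spec above exercises is a configMap volume whose items: list remaps a key onto a different relative path, consumed by a pod running as a non-root UID. A sketch of an equivalent pod, with illustrative names (demo-cm, demo-cm-pod) and UID:

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-cm-pod
spec:
  securityContext:
    runAsUser: 1000              # non-root, per the [LinuxOnly] variant
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
      items:                     # the "mapping": key data-1 is exposed under a new path
      - key: data-1
        path: path/to/data-1
EOF
kubectl logs demo-cm-pod         # prints value-1 once the pod completes
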
Aug 30 16:48:44.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:48:44.234: INFO: namespace configmap-6985 deletion completed in 6.161134601s • [SLOW TEST:12.423 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:48:44.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 30 16:48:44.336: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3423,SelfLink:/api/v1/namespaces/watch-3423/configmaps/e2e-watch-test-watch-closed,UID:63fb079b-e26e-4b15-b122-fea53f05c23f,ResourceVersion:4059190,Generation:0,CreationTimestamp:2020-08-30 16:48:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 30 16:48:44.343: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3423,SelfLink:/api/v1/namespaces/watch-3423/configmaps/e2e-watch-test-watch-closed,UID:63fb079b-e26e-4b15-b122-fea53f05c23f,ResourceVersion:4059191,Generation:0,CreationTimestamp:2020-08-30 16:48:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the 
first watch closed Aug 30 16:48:44.390: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3423,SelfLink:/api/v1/namespaces/watch-3423/configmaps/e2e-watch-test-watch-closed,UID:63fb079b-e26e-4b15-b122-fea53f05c23f,ResourceVersion:4059192,Generation:0,CreationTimestamp:2020-08-30 16:48:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 30 16:48:44.391: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3423,SelfLink:/api/v1/namespaces/watch-3423/configmaps/e2e-watch-test-watch-closed,UID:63fb079b-e26e-4b15-b122-fea53f05c23f,ResourceVersion:4059193,Generation:0,CreationTimestamp:2020-08-30 16:48:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:48:44.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3423" for this suite. Aug 30 16:48:50.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:48:50.638: INFO: namespace watch-3423 deletion completed in 6.224521723s • [SLOW TEST:6.403 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:48:50.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] 
should update a single-container pod's image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 30 16:48:50.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-4947' Aug 30 16:48:52.151: INFO: stderr: "" Aug 30 16:48:52.151: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Aug 30 16:48:57.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-4947 -o json' Aug 30 16:49:01.406: INFO: stderr: "" Aug 30 16:49:01.406: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-30T16:48:52Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-4947\",\n \"resourceVersion\": \"4059229\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-4947/pods/e2e-test-nginx-pod\",\n \"uid\": \"33cb41a7-a774-4706-84c8-b8b0f0587403\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-6vjng\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-6vjng\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-6vjng\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-30T16:48:52Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-30T16:48:55Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-30T16:48:55Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-30T16:48:52Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://eabb366b55294599cb180f2c4e07407d2ee55a4d6e70f957805d92d39bd30294\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n 
\"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-30T16:48:54Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.9\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.86\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-30T16:48:52Z\"\n }\n}\n" STEP: replace the image in the pod Aug 30 16:49:01.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4947' Aug 30 16:49:03.275: INFO: stderr: "" Aug 30 16:49:03.276: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Aug 30 16:49:03.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4947' Aug 30 16:49:13.352: INFO: stderr: "" Aug 30 16:49:13.352: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:49:13.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4947" for this suite. Aug 30 16:49:19.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:49:19.570: INFO: namespace kubectl-4947 deletion completed in 6.180843841s • [SLOW TEST:28.931 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:49:19.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-869cc984-30fa-4bff-977c-f0b3d71e0a79 STEP: Creating a pod to test consume secrets Aug 30 16:49:19.743: INFO: Waiting up to 5m0s for pod 
"pod-projected-secrets-4a4b5dfa-f274-4fae-8bd8-2bd52a5a1afa" in namespace "projected-8264" to be "success or failure" Aug 30 16:49:19.751: INFO: Pod "pod-projected-secrets-4a4b5dfa-f274-4fae-8bd8-2bd52a5a1afa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.472729ms Aug 30 16:49:21.757: INFO: Pod "pod-projected-secrets-4a4b5dfa-f274-4fae-8bd8-2bd52a5a1afa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013986312s Aug 30 16:49:23.763: INFO: Pod "pod-projected-secrets-4a4b5dfa-f274-4fae-8bd8-2bd52a5a1afa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020349122s Aug 30 16:49:25.770: INFO: Pod "pod-projected-secrets-4a4b5dfa-f274-4fae-8bd8-2bd52a5a1afa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026896035s STEP: Saw pod success Aug 30 16:49:25.770: INFO: Pod "pod-projected-secrets-4a4b5dfa-f274-4fae-8bd8-2bd52a5a1afa" satisfied condition "success or failure" Aug 30 16:49:25.774: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-4a4b5dfa-f274-4fae-8bd8-2bd52a5a1afa container projected-secret-volume-test: STEP: delete the pod Aug 30 16:49:25.807: INFO: Waiting for pod pod-projected-secrets-4a4b5dfa-f274-4fae-8bd8-2bd52a5a1afa to disappear Aug 30 16:49:25.811: INFO: Pod pod-projected-secrets-4a4b5dfa-f274-4fae-8bd8-2bd52a5a1afa no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:49:25.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8264" for this suite. Aug 30 16:49:31.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:49:31.980: INFO: namespace projected-8264 deletion completed in 6.162168553s • [SLOW TEST:12.409 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:49:31.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-eba61ad8-debb-42e1-9d8d-f2ec841b6f39 Aug 30 16:49:32.093: INFO: Pod name 
my-hostname-basic-eba61ad8-debb-42e1-9d8d-f2ec841b6f39: Found 0 pods out of 1 Aug 30 16:49:37.102: INFO: Pod name my-hostname-basic-eba61ad8-debb-42e1-9d8d-f2ec841b6f39: Found 1 pods out of 1 Aug 30 16:49:37.102: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-eba61ad8-debb-42e1-9d8d-f2ec841b6f39" are running Aug 30 16:49:37.107: INFO: Pod "my-hostname-basic-eba61ad8-debb-42e1-9d8d-f2ec841b6f39-kvfq2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-30 16:49:32 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-30 16:49:35 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-30 16:49:35 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-30 16:49:32 +0000 UTC Reason: Message:}]) Aug 30 16:49:37.108: INFO: Trying to dial the pod Aug 30 16:49:42.133: INFO: Controller my-hostname-basic-eba61ad8-debb-42e1-9d8d-f2ec841b6f39: Got expected result from replica 1 [my-hostname-basic-eba61ad8-debb-42e1-9d8d-f2ec841b6f39-kvfq2]: "my-hostname-basic-eba61ad8-debb-42e1-9d8d-f2ec841b6f39-kvfq2", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:49:42.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-440" for this suite. Aug 30 16:49:48.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:49:48.301: INFO: namespace replication-controller-440 deletion completed in 6.16097944s • [SLOW TEST:16.317 seconds] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:49:48.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Aug 30 16:49:48.451: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:49:57.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5863" for this suite. Aug 30 16:50:03.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:50:03.493: INFO: namespace init-container-5863 deletion completed in 6.184604692s • [SLOW TEST:15.190 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:50:03.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Aug 30 16:50:03.570: INFO: Waiting up to 5m0s for pod "pod-1c8ff873-8304-4ee5-af2f-f1cb2c6754d7" in namespace "emptydir-5813" to be "success or failure" Aug 30 16:50:03.579: INFO: Pod "pod-1c8ff873-8304-4ee5-af2f-f1cb2c6754d7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.450651ms Aug 30 16:50:06.017: INFO: Pod "pod-1c8ff873-8304-4ee5-af2f-f1cb2c6754d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.447330903s Aug 30 16:50:08.025: INFO: Pod "pod-1c8ff873-8304-4ee5-af2f-f1cb2c6754d7": Phase="Running", Reason="", readiness=true. Elapsed: 4.454982046s Aug 30 16:50:10.033: INFO: Pod "pod-1c8ff873-8304-4ee5-af2f-f1cb2c6754d7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.462969579s STEP: Saw pod success Aug 30 16:50:10.033: INFO: Pod "pod-1c8ff873-8304-4ee5-af2f-f1cb2c6754d7" satisfied condition "success or failure" Aug 30 16:50:10.039: INFO: Trying to get logs from node iruya-worker pod pod-1c8ff873-8304-4ee5-af2f-f1cb2c6754d7 container test-container: STEP: delete the pod Aug 30 16:50:10.083: INFO: Waiting for pod pod-1c8ff873-8304-4ee5-af2f-f1cb2c6754d7 to disappear Aug 30 16:50:10.094: INFO: Pod pod-1c8ff873-8304-4ee5-af2f-f1cb2c6754d7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:50:10.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5813" for this suite. Aug 30 16:50:16.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:50:16.246: INFO: namespace emptydir-5813 deletion completed in 6.141944876s • [SLOW TEST:12.749 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:50:16.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Aug 30 16:50:16.926: INFO: created pod pod-service-account-defaultsa Aug 30 16:50:16.926: INFO: pod pod-service-account-defaultsa service account token volume mount: true Aug 30 16:50:16.937: INFO: created pod pod-service-account-mountsa Aug 30 16:50:16.937: INFO: pod pod-service-account-mountsa service account token volume mount: true Aug 30 16:50:17.005: INFO: created pod pod-service-account-nomountsa Aug 30 16:50:17.005: INFO: pod pod-service-account-nomountsa service account token volume mount: false Aug 30 16:50:17.040: INFO: created pod pod-service-account-defaultsa-mountspec Aug 30 16:50:17.040: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Aug 30 16:50:17.072: INFO: created pod pod-service-account-mountsa-mountspec Aug 30 16:50:17.072: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Aug 30 16:50:17.122: INFO: created pod pod-service-account-nomountsa-mountspec Aug 30 16:50:17.122: INFO: pod pod-service-account-nomountsa-mountspec 
service account token volume mount: true Aug 30 16:50:17.135: INFO: created pod pod-service-account-defaultsa-nomountspec Aug 30 16:50:17.135: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Aug 30 16:50:17.193: INFO: created pod pod-service-account-mountsa-nomountspec Aug 30 16:50:17.193: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Aug 30 16:50:17.241: INFO: created pod pod-service-account-nomountsa-nomountspec Aug 30 16:50:17.241: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:50:17.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7606" for this suite. Aug 30 16:50:49.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:50:49.629: INFO: namespace svcaccounts-7606 deletion completed in 32.276886533s • [SLOW TEST:33.381 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:50:49.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-93b2c893-1f1d-441c-829c-21ff85fec3d0 in namespace container-probe-6073 Aug 30 16:50:55.767: INFO: Started pod liveness-93b2c893-1f1d-441c-829c-21ff85fec3d0 in namespace container-probe-6073 STEP: checking the pod's current state and verifying that restartCount is present Aug 30 16:50:55.772: INFO: Initial restart count of pod liveness-93b2c893-1f1d-441c-829c-21ff85fec3d0 is 0 Aug 30 16:51:13.835: INFO: Restart count of pod container-probe-6073/liveness-93b2c893-1f1d-441c-829c-21ff85fec3d0 is now 1 (18.062523523s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:51:13.863: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6073" for this suite. Aug 30 16:51:19.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 16:51:20.048: INFO: namespace container-probe-6073 deletion completed in 6.167218998s • [SLOW TEST:30.419 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 16:51:20.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 30 16:51:20.134: INFO: Waiting up to 5m0s for pod "pod-0cd54724-cad3-428d-93d5-201d491f86d9" in namespace "emptydir-4707" to be "success or failure" Aug 30 16:51:20.179: INFO: Pod "pod-0cd54724-cad3-428d-93d5-201d491f86d9": Phase="Pending", Reason="", readiness=false. Elapsed: 45.132121ms Aug 30 16:51:22.245: INFO: Pod "pod-0cd54724-cad3-428d-93d5-201d491f86d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111076172s Aug 30 16:51:24.252: INFO: Pod "pod-0cd54724-cad3-428d-93d5-201d491f86d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117822433s STEP: Saw pod success Aug 30 16:51:24.252: INFO: Pod "pod-0cd54724-cad3-428d-93d5-201d491f86d9" satisfied condition "success or failure" Aug 30 16:51:24.258: INFO: Trying to get logs from node iruya-worker pod pod-0cd54724-cad3-428d-93d5-201d491f86d9 container test-container: STEP: delete the pod Aug 30 16:51:24.328: INFO: Waiting for pod pod-0cd54724-cad3-428d-93d5-201d491f86d9 to disappear Aug 30 16:51:24.364: INFO: Pod pod-0cd54724-cad3-428d-93d5-201d491f86d9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 16:51:24.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4707" for this suite. 
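
The (root,0644,tmpfs) variant above boils down to: a memory-medium emptyDir, a file created as root with mode 0644, then contents and mode verified from the container's logs. A sketch under those assumptions, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-emptydir
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c",
      "echo hi > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f && grep /mnt /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory             # tmpfs-backed, hence the "tmpfs" in the spec name
EOF
kubectl logs demo-emptydir       # expect -rw-r--r-- on /mnt/f and a tmpfs mount entry
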
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 16:51:20.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 30 16:51:20.134: INFO: Waiting up to 5m0s for pod "pod-0cd54724-cad3-428d-93d5-201d491f86d9" in namespace "emptydir-4707" to be "success or failure"
Aug 30 16:51:20.179: INFO: Pod "pod-0cd54724-cad3-428d-93d5-201d491f86d9": Phase="Pending", Reason="", readiness=false. Elapsed: 45.132121ms
Aug 30 16:51:22.245: INFO: Pod "pod-0cd54724-cad3-428d-93d5-201d491f86d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111076172s
Aug 30 16:51:24.252: INFO: Pod "pod-0cd54724-cad3-428d-93d5-201d491f86d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117822433s
STEP: Saw pod success
Aug 30 16:51:24.252: INFO: Pod "pod-0cd54724-cad3-428d-93d5-201d491f86d9" satisfied condition "success or failure"
Aug 30 16:51:24.258: INFO: Trying to get logs from node iruya-worker pod pod-0cd54724-cad3-428d-93d5-201d491f86d9 container test-container: 
STEP: delete the pod
Aug 30 16:51:24.328: INFO: Waiting for pod pod-0cd54724-cad3-428d-93d5-201d491f86d9 to disappear
Aug 30 16:51:24.364: INFO: Pod pod-0cd54724-cad3-428d-93d5-201d491f86d9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 16:51:24.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4707" for this suite.
Aug 30 16:51:30.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 16:51:30.545: INFO: namespace emptydir-4707 deletion completed in 6.173547166s

• [SLOW TEST:10.495 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
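What this test drives: a pod mounting a memory-backed emptyDir, creating a file with mode 0644, and checking the resulting permissions and root ownership. A rough hand-written equivalent (the e2e suite uses its own mounttest image; busybox here is a stand-in):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -ln /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory            # tmpfs-backed; omit for the default node-disk medium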
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 16:51:30.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-aff21017-72c9-455e-b16f-c279f043666e in namespace container-probe-7556
Aug 30 16:51:36.741: INFO: Started pod busybox-aff21017-72c9-455e-b16f-c279f043666e in namespace container-probe-7556
STEP: checking the pod's current state and verifying that restartCount is present
Aug 30 16:51:36.745: INFO: Initial restart count of pod busybox-aff21017-72c9-455e-b16f-c279f043666e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 16:55:37.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7556" for this suite.
Aug 30 16:55:43.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 16:55:44.121: INFO: namespace container-probe-7556 deletion completed in 6.19575408s

• [SLOW TEST:253.572 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
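The four-minute runtime is the observation window: the suite watches the pod and asserts restartCount stays at 0 because the exec probe keeps succeeding. A minimal sketch (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "echo ok > /tmp/health && sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5

Because /tmp/health is created up front and never removed, every probe exits 0 and the kubelet never restarts the container.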
[sig-api-machinery] Secrets
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 16:55:44.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-7a9014b6-863d-4830-82b9-a7d0dc5ea245
STEP: Creating a pod to test consume secrets
Aug 30 16:55:44.245: INFO: Waiting up to 5m0s for pod "pod-secrets-072b62ad-e38c-4569-9258-1077685473e5" in namespace "secrets-5090" to be "success or failure"
Aug 30 16:55:44.268: INFO: Pod "pod-secrets-072b62ad-e38c-4569-9258-1077685473e5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.907501ms
Aug 30 16:55:46.275: INFO: Pod "pod-secrets-072b62ad-e38c-4569-9258-1077685473e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029127059s
Aug 30 16:55:48.282: INFO: Pod "pod-secrets-072b62ad-e38c-4569-9258-1077685473e5": Phase="Running", Reason="", readiness=true. Elapsed: 4.035951535s
Aug 30 16:55:50.289: INFO: Pod "pod-secrets-072b62ad-e38c-4569-9258-1077685473e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043705258s
STEP: Saw pod success
Aug 30 16:55:50.290: INFO: Pod "pod-secrets-072b62ad-e38c-4569-9258-1077685473e5" satisfied condition "success or failure"
Aug 30 16:55:50.294: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-072b62ad-e38c-4569-9258-1077685473e5 container secret-env-test: 
STEP: delete the pod
Aug 30 16:55:50.351: INFO: Waiting for pod pod-secrets-072b62ad-e38c-4569-9258-1077685473e5 to disappear
Aug 30 16:55:50.390: INFO: Pod pod-secrets-072b62ad-e38c-4569-9258-1077685473e5 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 16:55:50.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5090" for this suite.
Aug 30 16:55:56.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 16:55:56.603: INFO: namespace secrets-5090 deletion completed in 6.205063373s

• [SLOW TEST:12.481 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
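The pattern under test: a secret key surfaced to a container as an environment variable via secretKeyRef. A minimal sketch (names and values are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  data-1: dmFsdWUtMQ==          # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.29
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: data-1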
[sig-api-machinery] Secrets
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 16:55:56.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-10f79b6a-cb85-46f8-9e9a-49c4ec1e500f
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 16:55:56.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-808" for this suite.
Aug 30 16:56:02.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 16:56:02.844: INFO: namespace secrets-808 deletion completed in 6.156890732s

• [SLOW TEST:6.239 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
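This one is a negative test: API server validation rejects a Secret whose data map contains an empty key, so the create call fails and no pod is ever scheduled. A manifest that reproduces the rejection (the name is illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test
data:
  "": dmFsdWU=                  # empty key; the apiserver refuses to create this object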
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 16:56:02.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 30 16:56:02.964: INFO: Waiting up to 5m0s for pod "pod-d271bfbb-36ee-46e8-b337-b73f2c4f3492" in namespace "emptydir-733" to be "success or failure"
Aug 30 16:56:02.970: INFO: Pod "pod-d271bfbb-36ee-46e8-b337-b73f2c4f3492": Phase="Pending", Reason="", readiness=false. Elapsed: 5.64905ms
Aug 30 16:56:04.977: INFO: Pod "pod-d271bfbb-36ee-46e8-b337-b73f2c4f3492": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013160477s
Aug 30 16:56:07.094: INFO: Pod "pod-d271bfbb-36ee-46e8-b337-b73f2c4f3492": Phase="Running", Reason="", readiness=true. Elapsed: 4.130194356s
Aug 30 16:56:09.101: INFO: Pod "pod-d271bfbb-36ee-46e8-b337-b73f2c4f3492": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.137201939s
STEP: Saw pod success
Aug 30 16:56:09.102: INFO: Pod "pod-d271bfbb-36ee-46e8-b337-b73f2c4f3492" satisfied condition "success or failure"
Aug 30 16:56:09.107: INFO: Trying to get logs from node iruya-worker pod pod-d271bfbb-36ee-46e8-b337-b73f2c4f3492 container test-container: 
STEP: delete the pod
Aug 30 16:56:09.185: INFO: Waiting for pod pod-d271bfbb-36ee-46e8-b337-b73f2c4f3492 to disappear
Aug 30 16:56:09.209: INFO: Pod pod-d271bfbb-36ee-46e8-b337-b73f2c4f3492 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 16:56:09.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-733" for this suite.
Aug 30 16:56:15.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 16:56:15.354: INFO: namespace emptydir-733 deletion completed in 6.138628855s

• [SLOW TEST:12.509 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
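Same emptyDir pattern as the 0644 sketch above, but on the default (node-disk) medium, running as a non-root user, and with the file mode widened to 0666. A sketch with an arbitrary non-root UID:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-default   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001             # illustrative non-root UID
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -ln /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                # default medium (backed by node storage)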
[sig-apps] ReplicaSet
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 16:56:15.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 30 16:56:15.435: INFO: Creating ReplicaSet my-hostname-basic-5ba17df3-1d65-4bf0-b796-c3843d3fcc79
Aug 30 16:56:15.492: INFO: Pod name my-hostname-basic-5ba17df3-1d65-4bf0-b796-c3843d3fcc79: Found 1 pods out of 1
Aug 30 16:56:15.492: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5ba17df3-1d65-4bf0-b796-c3843d3fcc79" is running
Aug 30 16:56:21.504: INFO: Pod "my-hostname-basic-5ba17df3-1d65-4bf0-b796-c3843d3fcc79-w9fmw" is running (conditions: [])
Aug 30 16:56:21.504: INFO: Trying to dial the pod
Aug 30 16:56:26.528: INFO: Controller my-hostname-basic-5ba17df3-1d65-4bf0-b796-c3843d3fcc79: Got expected result from replica 1 [my-hostname-basic-5ba17df3-1d65-4bf0-b796-c3843d3fcc79-w9fmw]: "my-hostname-basic-5ba17df3-1d65-4bf0-b796-c3843d3fcc79-w9fmw", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 16:56:26.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3003" for this suite.
Aug 30 16:56:32.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 16:56:32.762: INFO: namespace replicaset-3003 deletion completed in 6.227233846s

• [SLOW TEST:17.407 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
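The test creates a one-replica ReplicaSet whose pod serves its own hostname over HTTP, then dials the replica and checks that the response matches the pod name. Roughly (the exact serve-hostname image tag used by the suite may differ):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic       # the suite appends a UUID to this prefix
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # serves the pod's hostname over HTTP
        ports:
        - containerPort: 9376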
[sig-storage] EmptyDir volumes
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 16:56:32.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 30 16:56:32.874: INFO: Waiting up to 5m0s for pod "pod-164c31f5-0ff5-4832-b938-ff8d610c4ebb" in namespace "emptydir-7397" to be "success or failure"
Aug 30 16:56:32.883: INFO: Pod "pod-164c31f5-0ff5-4832-b938-ff8d610c4ebb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207871ms
Aug 30 16:56:34.890: INFO: Pod "pod-164c31f5-0ff5-4832-b938-ff8d610c4ebb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015667883s
Aug 30 16:56:36.962: INFO: Pod "pod-164c31f5-0ff5-4832-b938-ff8d610c4ebb": Phase="Running", Reason="", readiness=true. Elapsed: 4.087941451s
Aug 30 16:56:38.969: INFO: Pod "pod-164c31f5-0ff5-4832-b938-ff8d610c4ebb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.094854532s
STEP: Saw pod success
Aug 30 16:56:38.969: INFO: Pod "pod-164c31f5-0ff5-4832-b938-ff8d610c4ebb" satisfied condition "success or failure"
Aug 30 16:56:38.974: INFO: Trying to get logs from node iruya-worker pod pod-164c31f5-0ff5-4832-b938-ff8d610c4ebb container test-container: 
STEP: delete the pod
Aug 30 16:56:39.017: INFO: Waiting for pod pod-164c31f5-0ff5-4832-b938-ff8d610c4ebb to disappear
Aug 30 16:56:39.026: INFO: Pod pod-164c31f5-0ff5-4832-b938-ff8d610c4ebb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 16:56:39.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7397" for this suite.
Aug 30 16:56:45.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 16:56:45.184: INFO: namespace emptydir-7397 deletion completed in 6.149775861s

• [SLOW TEST:12.418 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
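Third variation on the emptyDir permission matrix: tmpfs medium again, root user, mode 0777. Only the chmod argument changes relative to the 0644 sketch earlier:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-tmpfs     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -ln /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory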
[sig-network] Proxy version v1
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 16:56:45.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 30 16:56:45.281: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
alternatives.log
containers/

[the same two-entry listing is returned for each subsequent proxied request; the log is truncated here through the start of the next test]
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6173
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Aug 30 16:56:51.677: INFO: Found 0 stateful pods, waiting for 3
Aug 30 16:57:01.687: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 30 16:57:01.687: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 30 16:57:01.687: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 30 16:57:11.683: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 30 16:57:11.684: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 30 16:57:11.684: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 30 16:57:11.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6173 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 30 16:57:13.252: INFO: stderr: "I0830 16:57:13.092267    1415 log.go:172] (0x40005e6370) (0x4000908820) Create stream\nI0830 16:57:13.095082    1415 log.go:172] (0x40005e6370) (0x4000908820) Stream added, broadcasting: 1\nI0830 16:57:13.110950    1415 log.go:172] (0x40005e6370) Reply frame received for 1\nI0830 16:57:13.111527    1415 log.go:172] (0x40005e6370) (0x4000908000) Create stream\nI0830 16:57:13.111588    1415 log.go:172] (0x40005e6370) (0x4000908000) Stream added, broadcasting: 3\nI0830 16:57:13.112936    1415 log.go:172] (0x40005e6370) Reply frame received for 3\nI0830 16:57:13.113181    1415 log.go:172] (0x40005e6370) (0x4000982000) Create stream\nI0830 16:57:13.113241    1415 log.go:172] (0x40005e6370) (0x4000982000) Stream added, broadcasting: 5\nI0830 16:57:13.114387    1415 log.go:172] (0x40005e6370) Reply frame received for 5\nI0830 16:57:13.169579    1415 log.go:172] (0x40005e6370) Data frame received for 5\nI0830 16:57:13.169766    1415 log.go:172] (0x4000982000) (5) Data frame handling\nI0830 16:57:13.170121    1415 log.go:172] (0x4000982000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0830 16:57:13.229235    1415 log.go:172] (0x40005e6370) Data frame received for 3\nI0830 16:57:13.229508    1415 log.go:172] (0x4000908000) (3) Data frame handling\nI0830 16:57:13.229701    1415 log.go:172] (0x4000908000) (3) Data frame sent\nI0830 16:57:13.230760    1415 log.go:172] (0x40005e6370) Data frame received for 3\nI0830 16:57:13.230865    1415 log.go:172] (0x4000908000) (3) Data frame handling\nI0830 16:57:13.231033    1415 log.go:172] (0x40005e6370) Data frame received for 5\nI0830 16:57:13.231219    1415 log.go:172] (0x4000982000) (5) Data frame handling\nI0830 16:57:13.232594    1415 log.go:172] (0x40005e6370) Data frame received for 1\nI0830 16:57:13.232676    1415 log.go:172] (0x4000908820) (1) Data frame handling\nI0830 16:57:13.232859    1415 log.go:172] (0x4000908820) (1) Data frame sent\nI0830 16:57:13.233325    1415 log.go:172] (0x40005e6370) (0x4000908820) Stream removed, broadcasting: 1\nI0830 16:57:13.236090    1415 log.go:172] (0x40005e6370) (0x4000908820) Stream removed, broadcasting: 1\nI0830 16:57:13.236400    1415 log.go:172] (0x40005e6370) (0x4000908000) Stream removed, broadcasting: 3\nI0830 16:57:13.240203    1415 log.go:172] (0x40005e6370) Go away received\nI0830 16:57:13.241877    1415 log.go:172] (0x40005e6370) (0x4000982000) Stream removed, broadcasting: 5\n"
Aug 30 16:57:13.253: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 30 16:57:13.253: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 30 16:57:23.348: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 30 16:57:33.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6173 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 16:57:34.935: INFO: stderr: "I0830 16:57:34.794929    1436 log.go:172] (0x40005640b0) (0x40008661e0) Create stream\nI0830 16:57:34.797185    1436 log.go:172] (0x40005640b0) (0x40008661e0) Stream added, broadcasting: 1\nI0830 16:57:34.807385    1436 log.go:172] (0x40005640b0) Reply frame received for 1\nI0830 16:57:34.808283    1436 log.go:172] (0x40005640b0) (0x400063a1e0) Create stream\nI0830 16:57:34.808359    1436 log.go:172] (0x40005640b0) (0x400063a1e0) Stream added, broadcasting: 3\nI0830 16:57:34.810052    1436 log.go:172] (0x40005640b0) Reply frame received for 3\nI0830 16:57:34.810353    1436 log.go:172] (0x40005640b0) (0x4000350000) Create stream\nI0830 16:57:34.810410    1436 log.go:172] (0x40005640b0) (0x4000350000) Stream added, broadcasting: 5\nI0830 16:57:34.811496    1436 log.go:172] (0x40005640b0) Reply frame received for 5\nI0830 16:57:34.906196    1436 log.go:172] (0x40005640b0) Data frame received for 3\nI0830 16:57:34.906453    1436 log.go:172] (0x40005640b0) Data frame received for 5\nI0830 16:57:34.906624    1436 log.go:172] (0x40005640b0) Data frame received for 1\nI0830 16:57:34.906734    1436 log.go:172] (0x40008661e0) (1) Data frame handling\nI0830 16:57:34.906812    1436 log.go:172] (0x4000350000) (5) Data frame handling\nI0830 16:57:34.906908    1436 log.go:172] (0x400063a1e0) (3) Data frame handling\nI0830 16:57:34.907600    1436 log.go:172] (0x4000350000) (5) Data frame sent\nI0830 16:57:34.907775    1436 log.go:172] (0x40005640b0) Data frame received for 5\nI0830 16:57:34.907847    1436 log.go:172] (0x40008661e0) (1) Data frame sent\nI0830 16:57:34.907944    1436 log.go:172] (0x4000350000) (5) Data frame handling\nI0830 16:57:34.908350    1436 log.go:172] (0x400063a1e0) (3) Data frame sent\nI0830 16:57:34.908464    1436 log.go:172] (0x40005640b0) Data frame received for 3\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0830 16:57:34.908560    1436 log.go:172] (0x400063a1e0) (3) Data frame handling\nI0830 16:57:34.909769    1436 log.go:172] (0x40005640b0) (0x40008661e0) Stream removed, broadcasting: 1\nI0830 16:57:34.913537    1436 log.go:172] (0x40005640b0) Go away received\nI0830 16:57:34.914282    1436 log.go:172] (0x40005640b0) (0x40008661e0) Stream removed, broadcasting: 1\nI0830 16:57:34.917340    1436 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0x400063a1e0), 0x5:(*spdystream.Stream)(0x4000350000)}\nI0830 16:57:34.917696    1436 log.go:172] (0x40005640b0) (0x400063a1e0) Stream removed, broadcasting: 3\nI0830 16:57:34.917982    1436 log.go:172] (0x40005640b0) (0x4000350000) Stream removed, broadcasting: 5\n"
Aug 30 16:57:34.936: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 30 16:57:34.937: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

STEP: Rolling back to a previous revision
Aug 30 16:58:04.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6173 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 30 16:58:06.546: INFO: stderr: "I0830 16:58:06.349366    1459 log.go:172] (0x40006d66e0) (0x4000900820) Create stream\nI0830 16:58:06.352875    1459 log.go:172] (0x40006d66e0) (0x4000900820) Stream added, broadcasting: 1\nI0830 16:58:06.373674    1459 log.go:172] (0x40006d66e0) Reply frame received for 1\nI0830 16:58:06.374248    1459 log.go:172] (0x40006d66e0) (0x4000922000) Create stream\nI0830 16:58:06.374346    1459 log.go:172] (0x40006d66e0) (0x4000922000) Stream added, broadcasting: 3\nI0830 16:58:06.375605    1459 log.go:172] (0x40006d66e0) Reply frame received for 3\nI0830 16:58:06.375875    1459 log.go:172] (0x40006d66e0) (0x4000900000) Create stream\nI0830 16:58:06.375950    1459 log.go:172] (0x40006d66e0) (0x4000900000) Stream added, broadcasting: 5\nI0830 16:58:06.377257    1459 log.go:172] (0x40006d66e0) Reply frame received for 5\nI0830 16:58:06.466221    1459 log.go:172] (0x40006d66e0) Data frame received for 5\nI0830 16:58:06.466571    1459 log.go:172] (0x4000900000) (5) Data frame handling\nI0830 16:58:06.467253    1459 log.go:172] (0x4000900000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0830 16:58:06.523101    1459 log.go:172] (0x40006d66e0) Data frame received for 3\nI0830 16:58:06.523281    1459 log.go:172] (0x4000922000) (3) Data frame handling\nI0830 16:58:06.523392    1459 log.go:172] (0x4000922000) (3) Data frame sent\nI0830 16:58:06.523490    1459 log.go:172] (0x40006d66e0) Data frame received for 5\nI0830 16:58:06.523643    1459 log.go:172] (0x4000900000) (5) Data frame handling\nI0830 16:58:06.523830    1459 log.go:172] (0x40006d66e0) Data frame received for 3\nI0830 16:58:06.523897    1459 log.go:172] (0x4000922000) (3) Data frame handling\nI0830 16:58:06.525351    1459 log.go:172] (0x40006d66e0) Data frame received for 1\nI0830 16:58:06.525432    1459 log.go:172] (0x4000900820) (1) Data frame handling\nI0830 16:58:06.525506    1459 log.go:172] (0x4000900820) (1) Data frame sent\nI0830 16:58:06.526190    1459 log.go:172] (0x40006d66e0) (0x4000900820) Stream removed, broadcasting: 1\nI0830 16:58:06.528705    1459 log.go:172] (0x40006d66e0) Go away received\nI0830 16:58:06.530569    1459 log.go:172] (0x40006d66e0) (0x4000900820) Stream removed, broadcasting: 1\nI0830 16:58:06.531054    1459 log.go:172] (0x40006d66e0) (0x4000922000) Stream removed, broadcasting: 3\nI0830 16:58:06.531315    1459 log.go:172] (0x40006d66e0) (0x4000900000) Stream removed, broadcasting: 5\n"
Aug 30 16:58:06.547: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 30 16:58:06.547: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 30 16:58:16.590: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 30 16:58:26.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6173 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 16:58:28.245: INFO: stderr: "I0830 16:58:28.106614    1480 log.go:172] (0x40006964d0) (0x400086c780) Create stream\nI0830 16:58:28.112022    1480 log.go:172] (0x40006964d0) (0x400086c780) Stream added, broadcasting: 1\nI0830 16:58:28.130256    1480 log.go:172] (0x40006964d0) Reply frame received for 1\nI0830 16:58:28.130957    1480 log.go:172] (0x40006964d0) (0x40008ae000) Create stream\nI0830 16:58:28.131058    1480 log.go:172] (0x40006964d0) (0x40008ae000) Stream added, broadcasting: 3\nI0830 16:58:28.132720    1480 log.go:172] (0x40006964d0) Reply frame received for 3\nI0830 16:58:28.133228    1480 log.go:172] (0x40006964d0) (0x400086c000) Create stream\nI0830 16:58:28.133316    1480 log.go:172] (0x40006964d0) (0x400086c000) Stream added, broadcasting: 5\nI0830 16:58:28.134515    1480 log.go:172] (0x40006964d0) Reply frame received for 5\nI0830 16:58:28.223650    1480 log.go:172] (0x40006964d0) Data frame received for 5\nI0830 16:58:28.223837    1480 log.go:172] (0x40006964d0) Data frame received for 1\nI0830 16:58:28.223939    1480 log.go:172] (0x40006964d0) Data frame received for 3\nI0830 16:58:28.224067    1480 log.go:172] (0x400086c780) (1) Data frame handling\nI0830 16:58:28.224151    1480 log.go:172] (0x40008ae000) (3) Data frame handling\nI0830 16:58:28.224366    1480 log.go:172] (0x400086c000) (5) Data frame handling\nI0830 16:58:28.225161    1480 log.go:172] (0x40008ae000) (3) Data frame sent\nI0830 16:58:28.225388    1480 log.go:172] (0x400086c000) (5) Data frame sent\nI0830 16:58:28.225788    1480 log.go:172] (0x400086c780) (1) Data frame sent\nI0830 16:58:28.226018    1480 log.go:172] (0x40006964d0) Data frame received for 5\nI0830 16:58:28.226123    1480 log.go:172] (0x400086c000) (5) Data frame handling\nI0830 16:58:28.226192    1480 log.go:172] (0x40006964d0) Data frame received for 3\nI0830 16:58:28.226323    1480 log.go:172] (0x40008ae000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0830 16:58:28.227450    1480 log.go:172] (0x40006964d0) (0x400086c780) Stream removed, broadcasting: 1\nI0830 16:58:28.230101    1480 log.go:172] (0x40006964d0) Go away received\nI0830 16:58:28.233147    1480 log.go:172] (0x40006964d0) (0x400086c780) Stream removed, broadcasting: 1\nI0830 16:58:28.233510    1480 log.go:172] (0x40006964d0) (0x40008ae000) Stream removed, broadcasting: 3\nI0830 16:58:28.233712    1480 log.go:172] (0x40006964d0) (0x400086c000) Stream removed, broadcasting: 5\n"
Aug 30 16:58:28.246: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 30 16:58:28.246: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 30 16:58:38.281: INFO: Waiting for StatefulSet statefulset-6173/ss2 to complete update
Aug 30 16:58:38.282: INFO: Waiting for Pod statefulset-6173/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 30 16:58:38.282: INFO: Waiting for Pod statefulset-6173/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 30 16:58:38.282: INFO: Waiting for Pod statefulset-6173/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 30 16:58:48.335: INFO: Waiting for StatefulSet statefulset-6173/ss2 to complete update
Aug 30 16:58:48.335: INFO: Waiting for Pod statefulset-6173/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 30 16:58:48.335: INFO: Waiting for Pod statefulset-6173/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 30 16:58:58.297: INFO: Waiting for StatefulSet statefulset-6173/ss2 to complete update
Aug 30 16:58:58.297: INFO: Waiting for Pod statefulset-6173/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 30 16:59:08.299: INFO: Deleting all statefulset in ns statefulset-6173
Aug 30 16:59:08.304: INFO: Scaling statefulset ss2 to 0
Aug 30 16:59:28.337: INFO: Waiting for statefulset status.replicas updated to 0
Aug 30 16:59:28.341: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 16:59:28.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6173" for this suite.
Aug 30 16:59:34.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 16:59:34.508: INFO: namespace statefulset-6173 deletion completed in 6.145746074s

• [SLOW TEST:162.959 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
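For orientation, the object being rolled forward and back above has roughly this shape; the name, service, namespace, and images are taken from the log, everything else is a sketch:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test                     # the headless service created in namespace statefulset-6173
  selector:
    matchLabels:
      app: ss2                          # illustrative label; the suite generates its own
  updateStrategy:
    type: RollingUpdate                 # pods are replaced in reverse ordinal order: ss2-2, ss2-1, ss2-0
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # updated to nginx:1.15-alpine, then rolled back

Each template change produces a new controller revision (the ss2-7c9b54fd4c and ss2-6c5cd755cd names above), and the rollback is simply another template update back to the prior revision.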
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 16:59:34.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 30 16:59:34.673: INFO: Waiting up to 5m0s for pod "downward-api-ba9a8206-29db-4f9e-96e8-cd866b6c5caa" in namespace "downward-api-813" to be "success or failure"
Aug 30 16:59:34.685: INFO: Pod "downward-api-ba9a8206-29db-4f9e-96e8-cd866b6c5caa": Phase="Pending", Reason="", readiness=false. Elapsed: 11.390523ms
Aug 30 16:59:36.856: INFO: Pod "downward-api-ba9a8206-29db-4f9e-96e8-cd866b6c5caa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18282946s
Aug 30 16:59:38.862: INFO: Pod "downward-api-ba9a8206-29db-4f9e-96e8-cd866b6c5caa": Phase="Running", Reason="", readiness=true. Elapsed: 4.189270495s
Aug 30 16:59:40.870: INFO: Pod "downward-api-ba9a8206-29db-4f9e-96e8-cd866b6c5caa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.196384118s
STEP: Saw pod success
Aug 30 16:59:40.870: INFO: Pod "downward-api-ba9a8206-29db-4f9e-96e8-cd866b6c5caa" satisfied condition "success or failure"
Aug 30 16:59:40.875: INFO: Trying to get logs from node iruya-worker pod downward-api-ba9a8206-29db-4f9e-96e8-cd866b6c5caa container dapi-container: 
STEP: delete the pod
Aug 30 16:59:40.900: INFO: Waiting for pod downward-api-ba9a8206-29db-4f9e-96e8-cd866b6c5caa to disappear
Aug 30 16:59:40.971: INFO: Pod downward-api-ba9a8206-29db-4f9e-96e8-cd866b6c5caa no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 16:59:40.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-813" for this suite.
Aug 30 16:59:47.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 16:59:47.174: INFO: namespace downward-api-813 deletion completed in 6.192346099s

• [SLOW TEST:12.665 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
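The env-var plumbing this test checks uses resourceFieldRef, which exposes a container's own requests and limits. A minimal sketch (resource values are illustrative; the container name matches the log):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory

If limits are left unset, the downward API reports the node's allocatable capacity instead.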
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 16:59:47.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 30 16:59:52.333: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 16:59:52.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2052" for this suite.
Aug 30 17:00:14.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:00:14.594: INFO: namespace replicaset-2052 deletion completed in 22.207207261s

• [SLOW TEST:27.418 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
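Adoption and release both hinge on the controller's label selector and the pod's ownerReferences: a bare pod whose labels match an existing ReplicaSet's selector gets adopted (an ownerReference is added), and relabeling an owned pod so it no longer matches causes the controller to release it and create a replacement. A sketch of the two objects (the image is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1

Changing the pod's name label afterwards (for example, kubectl label pod pod-adoption-release name=released --overwrite) is the "matched label ... change" step in the log.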
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:00:14.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-c51ce311-b184-47f3-9b9c-9349b56cb416
STEP: Creating a pod to test consume secrets
Aug 30 17:00:14.787: INFO: Waiting up to 5m0s for pod "pod-secrets-c3a8ad7c-6c6e-4fa1-b05b-600c1d1c4683" in namespace "secrets-67" to be "success or failure"
Aug 30 17:00:14.858: INFO: Pod "pod-secrets-c3a8ad7c-6c6e-4fa1-b05b-600c1d1c4683": Phase="Pending", Reason="", readiness=false. Elapsed: 70.133109ms
Aug 30 17:00:17.115: INFO: Pod "pod-secrets-c3a8ad7c-6c6e-4fa1-b05b-600c1d1c4683": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327535993s
Aug 30 17:00:19.121: INFO: Pod "pod-secrets-c3a8ad7c-6c6e-4fa1-b05b-600c1d1c4683": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333091753s
Aug 30 17:00:21.128: INFO: Pod "pod-secrets-c3a8ad7c-6c6e-4fa1-b05b-600c1d1c4683": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.3405037s
STEP: Saw pod success
Aug 30 17:00:21.128: INFO: Pod "pod-secrets-c3a8ad7c-6c6e-4fa1-b05b-600c1d1c4683" satisfied condition "success or failure"
Aug 30 17:00:21.133: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-c3a8ad7c-6c6e-4fa1-b05b-600c1d1c4683 container secret-volume-test: 
STEP: delete the pod
Aug 30 17:00:21.157: INFO: Waiting for pod pod-secrets-c3a8ad7c-6c6e-4fa1-b05b-600c1d1c4683 to disappear
Aug 30 17:00:21.161: INFO: Pod pod-secrets-c3a8ad7c-6c6e-4fa1-b05b-600c1d1c4683 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:00:21.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-67" for this suite.
Aug 30 17:00:27.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:00:27.326: INFO: namespace secrets-67 deletion completed in 6.155574249s

• [SLOW TEST:12.731 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
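defaultMode controls the file permissions of the projected secret files; the test mounts the volume and reads the mode back. A sketch (mode and paths are illustrative; the container name matches the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-volume      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
      defaultMode: 0400         # each projected file gets mode 0400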
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:00:27.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 30 17:00:27.441: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 30 17:00:28.552: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:00:28.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2528" for this suite.
Aug 30 17:00:34.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:00:34.948: INFO: namespace replication-controller-2528 deletion completed in 6.362418086s

• [SLOW TEST:7.616 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
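The failure condition being surfaced is a ReplicaFailure condition in the ReplicationController's status, set when pod creation is denied by the quota. A sketch of the quota and the over-quota controller (names match the log; the image is a placeholder):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                   # one more than the quota allows
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1

Scaling replicas down to 2, as the test does, lets all pods fit under the quota and the condition is removed from status.conditions.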
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:00:34.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-020be743-a212-4ab3-986a-71c78b251ccc
STEP: Creating a pod to test consume configMaps
Aug 30 17:00:35.496: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e5ccbbd7-8c61-47f4-821e-6057d3bb4eee" in namespace "projected-2878" to be "success or failure"
Aug 30 17:00:35.516: INFO: Pod "pod-projected-configmaps-e5ccbbd7-8c61-47f4-821e-6057d3bb4eee": Phase="Pending", Reason="", readiness=false. Elapsed: 20.023992ms
Aug 30 17:00:37.523: INFO: Pod "pod-projected-configmaps-e5ccbbd7-8c61-47f4-821e-6057d3bb4eee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027320717s
Aug 30 17:00:39.559: INFO: Pod "pod-projected-configmaps-e5ccbbd7-8c61-47f4-821e-6057d3bb4eee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063373652s
STEP: Saw pod success
Aug 30 17:00:39.560: INFO: Pod "pod-projected-configmaps-e5ccbbd7-8c61-47f4-821e-6057d3bb4eee" satisfied condition "success or failure"
Aug 30 17:00:39.564: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-e5ccbbd7-8c61-47f4-821e-6057d3bb4eee container projected-configmap-volume-test: 
STEP: delete the pod
Aug 30 17:00:39.781: INFO: Waiting for pod pod-projected-configmaps-e5ccbbd7-8c61-47f4-821e-6057d3bb4eee to disappear
Aug 30 17:00:39.796: INFO: Pod pod-projected-configmaps-e5ccbbd7-8c61-47f4-821e-6057d3bb4eee no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:00:39.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2878" for this suite.
Aug 30 17:00:45.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:00:46.012: INFO: namespace projected-2878 deletion completed in 6.207205907s

• [SLOW TEST:11.063 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
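"Mappings" here means the items list of a projected configMap source, which renames keys to arbitrary file paths inside the volume. A sketch (the key, path, and names are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test
          items:
          - key: data-1
            path: path/to/data-2    # the key's content appears under this relative path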
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:00:46.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 30 17:00:46.161: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 30 17:00:46.197: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:00:46.209: INFO: Number of nodes with available pods: 0
Aug 30 17:00:46.209: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:00:47.222: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:00:47.228: INFO: Number of nodes with available pods: 0
Aug 30 17:00:47.229: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:00:48.223: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:00:48.229: INFO: Number of nodes with available pods: 0
Aug 30 17:00:48.229: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:00:49.373: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:00:49.409: INFO: Number of nodes with available pods: 0
Aug 30 17:00:49.409: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:00:50.218: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:00:50.223: INFO: Number of nodes with available pods: 0
Aug 30 17:00:50.223: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:00:51.222: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:00:51.227: INFO: Number of nodes with available pods: 0
Aug 30 17:00:51.228: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:00:52.222: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:00:52.229: INFO: Number of nodes with available pods: 2
Aug 30 17:00:52.230: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 30 17:00:52.326: INFO: Wrong image for pod: daemon-set-h7jpm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:00:52.326: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:00:52.345: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:00:53.354: INFO: Wrong image for pod: daemon-set-h7jpm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:00:53.354: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:00:53.365: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:00:54.353: INFO: Wrong image for pod: daemon-set-h7jpm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:00:54.353: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:00:54.362: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:00:55.355: INFO: Wrong image for pod: daemon-set-h7jpm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:00:55.356: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:00:55.365: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:00:56.353: INFO: Wrong image for pod: daemon-set-h7jpm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:00:56.354: INFO: Pod daemon-set-h7jpm is not available
Aug 30 17:00:56.354: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:00:56.363: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:00:57.356: INFO: Wrong image for pod: daemon-set-h7jpm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:00:57.356: INFO: Pod daemon-set-h7jpm is not available
Aug 30 17:00:57.356: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:00:57.364: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:00:58.352: INFO: Wrong image for pod: daemon-set-h7jpm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:00:58.352: INFO: Pod daemon-set-h7jpm is not available
Aug 30 17:00:58.352: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:00:58.360: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:00:59.354: INFO: Wrong image for pod: daemon-set-h7jpm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:00:59.355: INFO: Pod daemon-set-h7jpm is not available
Aug 30 17:00:59.355: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:00:59.365: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:01:00.354: INFO: Wrong image for pod: daemon-set-h7jpm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:01:00.354: INFO: Pod daemon-set-h7jpm is not available
Aug 30 17:01:00.354: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:01:00.364: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:01:01.355: INFO: Wrong image for pod: daemon-set-h7jpm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:01:01.356: INFO: Pod daemon-set-h7jpm is not available
Aug 30 17:01:01.356: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:01:01.364: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:01:02.354: INFO: Wrong image for pod: daemon-set-h7jpm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:01:02.354: INFO: Pod daemon-set-h7jpm is not available
Aug 30 17:01:02.354: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:01:02.364: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:01:03.362: INFO: Pod daemon-set-b9q6m is not available
Aug 30 17:01:03.362: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:01:03.442: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:01:04.352: INFO: Pod daemon-set-b9q6m is not available
Aug 30 17:01:04.352: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:01:04.361: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:01:05.597: INFO: Pod daemon-set-b9q6m is not available
Aug 30 17:01:05.598: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:01:05.646: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:01:06.353: INFO: Pod daemon-set-b9q6m is not available
Aug 30 17:01:06.354: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:01:06.363: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:01:07.356: INFO: Pod daemon-set-b9q6m is not available
Aug 30 17:01:07.356: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:01:07.364: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:01:08.352: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:01:08.359: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:01:09.411: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:01:09.411: INFO: Pod daemon-set-njn5f is not available
Aug 30 17:01:09.444: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:01:10.353: INFO: Wrong image for pod: daemon-set-njn5f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 30 17:01:10.353: INFO: Pod daemon-set-njn5f is not available
Aug 30 17:01:10.363: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:01:11.353: INFO: Pod daemon-set-knplb is not available
Aug 30 17:01:11.361: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 30 17:01:11.370: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:01:11.375: INFO: Number of nodes with available pods: 1
Aug 30 17:01:11.375: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 30 17:01:12.385: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:01:12.402: INFO: Number of nodes with available pods: 1
Aug 30 17:01:12.402: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 30 17:01:13.384: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:01:13.396: INFO: Number of nodes with available pods: 2
Aug 30 17:01:13.397: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1804, will wait for the garbage collector to delete the pods
Aug 30 17:01:13.547: INFO: Deleting DaemonSet.extensions daemon-set took: 7.232006ms
Aug 30 17:01:13.848: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.905916ms
Aug 30 17:01:23.469: INFO: Number of nodes with available pods: 0
Aug 30 17:01:23.469: INFO: Number of running nodes: 0, number of available pods: 0
Aug 30 17:01:23.473: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1804/daemonsets","resourceVersion":"4061718"},"items":null}

Aug 30 17:01:23.477: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1804/pods","resourceVersion":"4061718"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:01:23.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1804" for this suite.
Aug 30 17:01:29.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:01:29.693: INFO: namespace daemonsets-1804 deletion completed in 6.183299114s

• [SLOW TEST:43.678 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
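For reference, a minimal DaemonSet manifest that would exercise the same RollingUpdate path as the spec above, reconstructed from the names and images in the log; the selector label and container name are illustrative assumptions, not taken from the test source:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-1804
spec:
  selector:
    matchLabels:
      app: daemon-set            # illustrative label; the real test defines its own selector
  updateStrategy:
    type: RollingUpdate          # the update strategy this conformance spec exercises
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                # illustrative container name
        image: docker.io/library/nginx:1.14-alpine   # later updated to gcr.io/kubernetes-e2e-test-images/redis:1.0

Patching spec.template.spec.containers[0].image to gcr.io/kubernetes-e2e-test-images/redis:1.0 produces the behaviour logged above: each old pod (daemon-set-h7jpm, daemon-set-njn5f) is reported with the wrong image, becomes unavailable, and is replaced node by node before the final "running on every node" check passes.
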
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:01:29.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 30 17:01:29.796: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:01:43.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4526" for this suite.
Aug 30 17:01:49.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:01:49.809: INFO: namespace pods-4526 deletion completed in 6.151589194s

• [SLOW TEST:20.114 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
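The pod used by this spec is not printed in the log; a minimal stand-in that follows the same lifecycle (create, observe via watch, delete gracefully) could look like this, with the name and image as placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove             # placeholder; the real test generates a unique name
  namespace: pods-4526
spec:
  terminationGracePeriodSeconds: 30   # graceful deletion, giving the kubelet time to observe the termination notice
  containers:
  - name: app                         # placeholder container name
    image: docker.io/library/nginx:1.14-alpine   # placeholder image

Per the STEP lines above, the spec opens a watch on the pod list before submitting the pod, so it can assert that the creation is observed as an event and that, after the graceful delete, a final deletion event arrives once the kubelet has torn the container down.
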
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:01:49.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 30 17:01:49.953: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 30 17:01:54.081: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 30 17:01:56.091: INFO: Creating deployment "test-rollover-deployment"
Aug 30 17:01:56.122: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 30 17:01:58.138: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 30 17:01:58.162: INFO: Ensure that both replica sets have 1 created replica
Aug 30 17:01:58.173: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 30 17:01:58.185: INFO: Updating deployment test-rollover-deployment
Aug 30 17:01:58.185: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 30 17:02:00.202: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 30 17:02:00.214: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 30 17:02:00.224: INFO: all replica sets need to contain the pod-template-hash label
Aug 30 17:02:00.226: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403718, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 30 17:02:02.243: INFO: all replica sets need to contain the pod-template-hash label
Aug 30 17:02:02.243: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403718, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 30 17:02:04.237: INFO: all replica sets need to contain the pod-template-hash label
Aug 30 17:02:04.237: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403722, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 30 17:02:06.242: INFO: all replica sets need to contain the pod-template-hash label
Aug 30 17:02:06.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403722, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 30 17:02:08.240: INFO: all replica sets need to contain the pod-template-hash label
Aug 30 17:02:08.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403722, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 30 17:02:10.239: INFO: all replica sets need to contain the pod-template-hash label
Aug 30 17:02:10.239: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403722, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 30 17:02:12.243: INFO: all replica sets need to contain the pod-template-hash label
Aug 30 17:02:12.243: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403722, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734403716, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 30 17:02:14.245: INFO: 
Aug 30 17:02:14.245: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 30 17:02:14.272: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-9462,SelfLink:/apis/apps/v1/namespaces/deployment-9462/deployments/test-rollover-deployment,UID:8d78f8c9-9e01-4622-a744-97eb46e58baf,ResourceVersion:4061946,Generation:2,CreationTimestamp:2020-08-30 17:01:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-30 17:01:56 +0000 UTC 2020-08-30 17:01:56 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-30 17:02:12 +0000 UTC 2020-08-30 17:01:56 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 30 17:02:14.284: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-9462,SelfLink:/apis/apps/v1/namespaces/deployment-9462/replicasets/test-rollover-deployment-854595fc44,UID:94ba8e60-08d2-44c8-a69b-4a3b820de68e,ResourceVersion:4061934,Generation:2,CreationTimestamp:2020-08-30 17:01:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8d78f8c9-9e01-4622-a744-97eb46e58baf 0x40038a4007 0x40038a4008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 30 17:02:14.284: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 30 17:02:14.285: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-9462,SelfLink:/apis/apps/v1/namespaces/deployment-9462/replicasets/test-rollover-controller,UID:9687ef71-914f-448a-9f43-a4b9ab20a251,ResourceVersion:4061944,Generation:2,CreationTimestamp:2020-08-30 17:01:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8d78f8c9-9e01-4622-a744-97eb46e58baf 0x40035c5f1f 0x40035c5f30}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 30 17:02:14.287: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-9462,SelfLink:/apis/apps/v1/namespaces/deployment-9462/replicasets/test-rollover-deployment-9b8b997cf,UID:6f930a85-5750-4086-8e5d-2bd33b360658,ResourceVersion:4061897,Generation:2,CreationTimestamp:2020-08-30 17:01:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8d78f8c9-9e01-4622-a744-97eb46e58baf 0x40038a40d0 0x40038a40d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 30 17:02:14.297: INFO: Pod "test-rollover-deployment-854595fc44-bn4f8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-bn4f8,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-9462,SelfLink:/api/v1/namespaces/deployment-9462/pods/test-rollover-deployment-854595fc44-bn4f8,UID:5cb40b3c-f3f3-41d5-8447-66c522e65c13,ResourceVersion:4061912,Generation:0,CreationTimestamp:2020-08-30 17:01:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 94ba8e60-08d2-44c8-a69b-4a3b820de68e 0x4003842a57 0x4003842a58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mdl7k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mdl7k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-mdl7k true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003842ad0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003842af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:01:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:02:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:02:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:01:58 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.111,StartTime:2020-08-30 17:01:58 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-30 17:02:02 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://8062d5694ce2324bb9804d10d6f35517fc873380adf4a185c745e23492a82f80}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:02:14.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9462" for this suite.
Aug 30 17:02:22.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:02:22.495: INFO: namespace deployment-9462 deletion completed in 8.18731427s

• [SLOW TEST:32.683 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
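The object dump above contains enough to reassemble the deployment as YAML; this is its post-rollover state (revision 2), with every field below taken from the dumped spec:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  namespace: deployment-9462
  labels:
    name: rollover-pod
spec:
  replicas: 1
  minReadySeconds: 10            # why the suite polls for ~16s before the new ReplicaSet counts as available
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0          # never drop below the desired replica count
      maxSurge: 1                # roll by creating one extra pod first
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0

The rollover itself is the update from the intermediate image gcr.io/google_samples/gb-redisslave:nonexistent (revision 1, ReplicaSet test-rollover-deployment-9b8b997cf) to this redis image before the first rollout ever completes; the spec then verifies that both old ReplicaSets end up scaled to zero.
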
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:02:22.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-6397
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-6397
STEP: Deleting pre-stop pod
Aug 30 17:02:37.694: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:02:37.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6397" for this suite.
Aug 30 17:03:15.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:03:16.022: INFO: namespace prestop-6397 deletion completed in 38.187462357s

• [SLOW TEST:53.526 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
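The log only shows the server's JSON state, not the pod manifests. As a sketch of the mechanism being verified: the tester pod carries a preStop lifecycle hook that reports back to the server pod when the tester is deleted, which is what increments the "prestop" counter seen above. Every name, image, and the hook shape below are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: tester
  namespace: prestop-6397
spec:
  containers:
  - name: tester
    image: example.invalid/tester:1.0     # placeholder image
    lifecycle:
      preStop:                            # run by the kubelet before the container is stopped
        httpGet:
          host: 10.244.1.1                # placeholder for the server pod's IP
          port: 8080
          path: /write                    # placeholder endpoint on the server pod

Deleting this pod fires the hook before termination, so the server records Received: {"prestop": 1}, which is exactly what the spec asserts on.
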
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:03:16.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 30 17:03:42.198: INFO: Container started at 2020-08-30 17:03:19 +0000 UTC, pod became ready at 2020-08-30 17:03:41 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:03:42.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4654" for this suite.
Aug 30 17:04:06.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:04:06.350: INFO: namespace container-probe-4654 deletion completed in 24.142131526s

• [SLOW TEST:50.326 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
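Between container start (17:03:19) and readiness (17:03:41) the log shows a gap of roughly 22 s that the spec attributes to the probe's initial delay. A sketch of a pod with that shape, where the image, the probe command, and the exact timings are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-probe-pod      # placeholder name
  namespace: container-probe-4654
spec:
  containers:
  - name: app                    # placeholder container name
    image: docker.io/library/nginx:1.14-alpine      # placeholder image
    readinessProbe:
      exec:
        command: ["test", "-f", "/tmp/ready"]       # placeholder check
      initialDelaySeconds: 20    # illustrative; no probe runs before this delay, so the pod cannot be Ready earlier
      periodSeconds: 5

The assertion is two-sided: the pod must not report Ready before the initial delay has elapsed, and the container's restart count must stay at zero for the whole observation window.
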
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:04:06.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-65d28e6e-5fd9-422e-ab54-cd27c2b568e9
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:04:06.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3778" for this suite.
Aug 30 17:04:12.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:04:12.635: INFO: namespace configmap-3778 deletion completed in 6.153480401s

• [SLOW TEST:6.284 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
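The manifest that this spec expects the API server to reject is easy to reconstruct; the value string and the lowercased name are assumptions, the empty key is the point of the test:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey      # lowercased placeholder; the generated name is in the STEP line above
  namespace: configmap-3778
data:
  "": "value"                        # empty key: rejected by API-server validation, so the object is never created

Creation fails with an Invalid error because config keys must be non-empty and consist of alphanumerics, '-', '_' or '.'; the spec passes precisely when this create call is refused.
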
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:04:12.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 30 17:04:12.759: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1781,SelfLink:/api/v1/namespaces/watch-1781/configmaps/e2e-watch-test-resource-version,UID:998938c3-a202-43e5-8d7a-0557e9919c0b,ResourceVersion:4062297,Generation:0,CreationTimestamp:2020-08-30 17:04:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 30 17:04:12.760: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1781,SelfLink:/api/v1/namespaces/watch-1781/configmaps/e2e-watch-test-resource-version,UID:998938c3-a202-43e5-8d7a-0557e9919c0b,ResourceVersion:4062298,Generation:0,CreationTimestamp:2020-08-30 17:04:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:04:12.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1781" for this suite.
Aug 30 17:04:18.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:04:18.905: INFO: namespace watch-1781 deletion completed in 6.137002751s

• [SLOW TEST:6.269 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
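The two events above decode to this object; reassembled as YAML from the dump, it shows why both events carry mutation: 2:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-resource-version
  namespace: watch-1781
  labels:
    watch-this-configmap: from-resource-version
data:
  mutation: "2"        # set by the second modification; both replayed events post-date the first update

Because the watch is opened with the resourceVersion returned by the first update, the API server replays only what happened after that point: the second MODIFIED (resourceVersion 4062297) and the DELETED (4062298), exactly the two events logged.
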
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:04:18.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-53953353-cc45-421b-b567-e5f09bcbd0e1
STEP: Creating a pod to test consume configMaps
Aug 30 17:04:19.061: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7ec6e5c1-5399-49a2-a9b3-215dc85f4d4b" in namespace "projected-807" to be "success or failure"
Aug 30 17:04:19.124: INFO: Pod "pod-projected-configmaps-7ec6e5c1-5399-49a2-a9b3-215dc85f4d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 62.496292ms
Aug 30 17:04:21.131: INFO: Pod "pod-projected-configmaps-7ec6e5c1-5399-49a2-a9b3-215dc85f4d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069430583s
Aug 30 17:04:23.143: INFO: Pod "pod-projected-configmaps-7ec6e5c1-5399-49a2-a9b3-215dc85f4d4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081544838s
STEP: Saw pod success
Aug 30 17:04:23.143: INFO: Pod "pod-projected-configmaps-7ec6e5c1-5399-49a2-a9b3-215dc85f4d4b" satisfied condition "success or failure"
Aug 30 17:04:23.148: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-7ec6e5c1-5399-49a2-a9b3-215dc85f4d4b container projected-configmap-volume-test: 
STEP: delete the pod
Aug 30 17:04:23.192: INFO: Waiting for pod pod-projected-configmaps-7ec6e5c1-5399-49a2-a9b3-215dc85f4d4b to disappear
Aug 30 17:04:23.237: INFO: Pod pod-projected-configmaps-7ec6e5c1-5399-49a2-a9b3-215dc85f4d4b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:04:23.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-807" for this suite.
Aug 30 17:04:29.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:04:29.505: INFO: namespace projected-807 deletion completed in 6.260356361s

• [SLOW TEST:10.596 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
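The ConfigMap, pod, and container names come from the log; the image, mount path, and the particular mode value are assumptions. A sketch of the pod under test:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-7ec6e5c1-5399-49a2-a9b3-215dc85f4d4b
  namespace: projected-807
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: example.invalid/mounttest:1.0    # placeholder; the real test image isn't shown in the log
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0400                     # illustrative mode; the asserted value isn't in the log
      sources:
      - configMap:
          name: projected-configmap-test-volume-53953353-cc45-421b-b567-e5f09bcbd0e1

The "success or failure" pattern above means the container reads the mounted file, checks its permission bits against the configured defaultMode, and exits zero only if they match.
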
SSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:04:29.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 30 17:04:34.162: INFO: Successfully updated pod "labelsupdate78642078-da88-46e3-8433-80341c178fcb"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:04:38.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7446" for this suite.
Aug 30 17:05:00.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:05:00.439: INFO: namespace downward-api-7446 deletion completed in 22.186626813s

• [SLOW TEST:30.933 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
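Only the generated pod name appears in the log; the rest of this sketch (image, mount path, label key) is assumed. The mechanism under test is a downwardAPI volume whose labels file the kubelet rewrites after the pod's labels are updated:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate78642078-da88-46e3-8433-80341c178fcb
  namespace: downward-api-7446
  labels:
    key: value1                  # illustrative; the test mutates a label and re-reads the file
spec:
  containers:
  - name: client                 # placeholder container name
    image: example.invalid/mounttest:1.0   # placeholder image
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels   # the kubelet refreshes this file when the pod's labels change

"Successfully updated pod" above marks the label mutation; the few seconds before teardown are the spec waiting for the refreshed file contents to appear inside the container.
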
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:05:00.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 30 17:05:00.543: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb0bb101-a87b-4bd4-97cd-3a02b6cece68" in namespace "projected-8066" to be "success or failure"
Aug 30 17:05:00.549: INFO: Pod "downwardapi-volume-bb0bb101-a87b-4bd4-97cd-3a02b6cece68": Phase="Pending", Reason="", readiness=false. Elapsed: 5.928704ms
Aug 30 17:05:02.679: INFO: Pod "downwardapi-volume-bb0bb101-a87b-4bd4-97cd-3a02b6cece68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135522997s
Aug 30 17:05:04.686: INFO: Pod "downwardapi-volume-bb0bb101-a87b-4bd4-97cd-3a02b6cece68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.142919834s
STEP: Saw pod success
Aug 30 17:05:04.687: INFO: Pod "downwardapi-volume-bb0bb101-a87b-4bd4-97cd-3a02b6cece68" satisfied condition "success or failure"
Aug 30 17:05:04.699: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-bb0bb101-a87b-4bd4-97cd-3a02b6cece68 container client-container: 
STEP: delete the pod
Aug 30 17:05:04.732: INFO: Waiting for pod downwardapi-volume-bb0bb101-a87b-4bd4-97cd-3a02b6cece68 to disappear
Aug 30 17:05:04.770: INFO: Pod downwardapi-volume-bb0bb101-a87b-4bd4-97cd-3a02b6cece68 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:05:04.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8066" for this suite.
Aug 30 17:05:10.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:05:10.939: INFO: namespace projected-8066 deletion completed in 6.160503757s

• [SLOW TEST:10.498 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
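Note on the podname test above: "Projected downwardAPI" is the projected-volume form of the same mechanism. One projected volume aggregates sources, here a single downwardAPI projection of metadata.name; the container prints the file, and the framework drives the usual "success or failure" wait before matching the pod name in the container log. A sketch of just the volume source (illustrative names, assuming the projected form):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // podnameVolume projects only the pod's own name into a file named "podname".
    func podnameVolume() corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        },
                    }},
                },
            },
        }
    }

    func main() {
        fmt.Println(podnameVolume().Name)
    }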
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:05:10.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 30 17:05:11.018: INFO: Creating deployment "nginx-deployment"
Aug 30 17:05:11.028: INFO: Waiting for observed generation 1
Aug 30 17:05:13.885: INFO: Waiting for all required pods to come up
Aug 30 17:05:14.207: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 30 17:05:24.230: INFO: Waiting for deployment "nginx-deployment" to complete
Aug 30 17:05:24.240: INFO: Updating deployment "nginx-deployment" with a non-existent image
Aug 30 17:05:24.248: INFO: Updating deployment nginx-deployment
Aug 30 17:05:24.248: INFO: Waiting for observed generation 2
Aug 30 17:05:26.263: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 30 17:05:26.267: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 30 17:05:26.275: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 30 17:05:26.285: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 30 17:05:26.286: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 30 17:05:26.297: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 30 17:05:26.839: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Aug 30 17:05:26.839: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Aug 30 17:05:26.935: INFO: Updating deployment nginx-deployment
Aug 30 17:05:26.936: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Aug 30 17:05:27.564: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 30 17:05:30.702: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
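Note: every number this test verifies falls out of the RollingUpdate parameters visible in the Deployment dump below (MaxSurge:3, MaxUnavailable:2). At 10 replicas the broken-image rollout may run at most 10+3=13 pods while keeping at least 10-2=8 available, so the old ReplicaSet is scaled down to 8 and the new one up to 13-8=5, where the rollout stalls because nginx:404 can never pull. Scaling the deployment to 30 raises the saturation cap to 30+3=33, and proportional scaling grows each ReplicaSet by its share of that cap: round(8*33/13)=20 and round(5*33/13)=13, exactly the .spec.replicas values checked at 17:05:27 and 17:05:30. A standalone Go sketch of that rule, a simplification of the deployment controller's actual bookkeeping:

    package main

    import (
        "fmt"
        "math"
    )

    // proportionalScale resizes each ReplicaSet to its share of the new
    // saturated size (replicas + maxSurge), which is the rule behind the
    // 8 -> 20 and 5 -> 13 jumps in this log. The real controller also
    // tracks rounding leftovers so the totals always match exactly.
    func proportionalScale(rsSizes []int, oldReplicas, newReplicas, maxSurge int) []int {
        oldMax := float64(oldReplicas + maxSurge) // 10 + 3 = 13
        newMax := float64(newReplicas + maxSurge) // 30 + 3 = 33
        out := make([]int, len(rsSizes))
        for i, n := range rsSizes {
            out[i] = int(math.Round(float64(n) * newMax / oldMax))
        }
        return out
    }

    func main() {
        // Old ReplicaSet at 8, new (broken-image) ReplicaSet at 5, scaling 10 -> 30.
        fmt.Println(proportionalScale([]int{8, 5}, 10, 30, 3)) // [20 13]
    }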
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 30 17:05:30.986: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-4600,SelfLink:/apis/apps/v1/namespaces/deployment-4600/deployments/nginx-deployment,UID:959f1094-ab67-4150-8bff-676654102047,ResourceVersion:4062777,Generation:3,CreationTimestamp:2020-08-30 17:05:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-08-30 17:05:27 +0000 UTC 2020-08-30 17:05:27 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-30 17:05:28 +0000 UTC 2020-08-30 17:05:11 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}
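Note on the Deployment dump above: Status shows Replicas:33 (30 desired plus MaxSurge:3), UpdatedReplicas:13 on the broken image, and AvailableReplicas:8; the Available condition is False with reason MinimumReplicasUnavailable because availability requires at least replicas - maxUnavailable = 30 - 2 = 28 ready pods. A tiny sketch of that predicate (hypothetical helper, not the controller's code):

    package main

    import "fmt"

    // deploymentAvailable mirrors the availability predicate: a deployment is
    // Available only while availableReplicas >= replicas - maxUnavailable.
    func deploymentAvailable(replicas, maxUnavailable, available int32) bool {
        return available >= replicas-maxUnavailable
    }

    func main() {
        // From the dump: Replicas:*30, MaxUnavailable:2, AvailableReplicas:8.
        fmt.Println(deploymentAvailable(30, 2, 8)) // false -> MinimumReplicasUnavailable
    }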

Aug 30 17:05:31.083: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-4600,SelfLink:/apis/apps/v1/namespaces/deployment-4600/replicasets/nginx-deployment-55fb7cb77f,UID:04bc6de5-d645-4c27-afcc-3078a6a77909,ResourceVersion:4062772,Generation:3,CreationTimestamp:2020-08-30 17:05:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 959f1094-ab67-4150-8bff-676654102047 0x4002845e07 0x4002845e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 30 17:05:31.083: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Aug 30 17:05:31.084: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-4600,SelfLink:/apis/apps/v1/namespaces/deployment-4600/replicasets/nginx-deployment-7b8c6f4498,UID:8ab50557-42ba-4b9f-af2d-a340461dd935,ResourceVersion:4062755,Generation:3,CreationTimestamp:2020-08-30 17:05:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 959f1094-ab67-4150-8bff-676654102047 0x4002845ed7 0x4002845ed8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
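Note on the two ReplicaSet dumps above: both carry deployment.kubernetes.io/desired-replicas: 30 and deployment.kubernetes.io/max-replicas: 33, the sizes the controller stamps on every ReplicaSet it manages so a later proportional split can be recomputed from them, while deployment.kubernetes.io/revision (2 vs 1) tells the new ReplicaSet from the old. A hypothetical reader for the cap annotation (not framework code):

    package main

    import (
        "fmt"
        "strconv"

        appsv1 "k8s.io/api/apps/v1"
    )

    // annotatedMaxReplicas reads the saturation cap the deployment controller
    // records on each ReplicaSet; ok is false if the annotation is missing or
    // malformed.
    func annotatedMaxReplicas(rs *appsv1.ReplicaSet) (int32, bool) {
        v, found := rs.Annotations["deployment.kubernetes.io/max-replicas"]
        if !found {
            return 0, false
        }
        n, err := strconv.ParseInt(v, 10, 32)
        if err != nil {
            return 0, false
        }
        return int32(n), true
    }

    func main() {
        rs := &appsv1.ReplicaSet{}
        rs.Annotations = map[string]string{"deployment.kubernetes.io/max-replicas": "33"}
        fmt.Println(annotatedMaxReplicas(rs)) // 33 true
    }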
Aug 30 17:05:31.166: INFO: Pod "nginx-deployment-55fb7cb77f-4pxtr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4pxtr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-55fb7cb77f-4pxtr,UID:acaed28b-4065-43d4-8628-0374277de464,ResourceVersion:4062783,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 04bc6de5-d645-4c27-afcc-3078a6a77909 0x4001bef667 0x4001bef668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4001bef710} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001bef730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.167: INFO: Pod "nginx-deployment-55fb7cb77f-5x69c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5x69c,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-55fb7cb77f-5x69c,UID:f0e7df4c-0f86-47b3-9455-d8acaba3667a,ResourceVersion:4062818,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 04bc6de5-d645-4c27-afcc-3078a6a77909 0x4001bef810 0x4001bef811}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4001bef890} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001bef8b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.207: INFO: Pod "nginx-deployment-55fb7cb77f-7xbvk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7xbvk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-55fb7cb77f-7xbvk,UID:3fecf74b-65bf-45ef-8457-4f633838880c,ResourceVersion:4062662,Generation:0,CreationTimestamp:2020-08-30 17:05:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 04bc6de5-d645-4c27-afcc-3078a6a77909 0x4001bef990 0x4001bef991}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4001befa10} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001befa30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-30 17:05:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.208: INFO: Pod "nginx-deployment-55fb7cb77f-9p5l6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9p5l6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-55fb7cb77f-9p5l6,UID:2f823c58-bc1a-4626-95f9-49d304b6b95a,ResourceVersion:4062678,Generation:0,CreationTimestamp:2020-08-30 17:05:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 04bc6de5-d645-4c27-afcc-3078a6a77909 0x4001befb00 0x4001befb01}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4001befb80} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001befba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-30 17:05:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.209: INFO: Pod "nginx-deployment-55fb7cb77f-9r2lf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9r2lf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-55fb7cb77f-9r2lf,UID:7f9bc696-4320-47b8-b52a-b3320a8d6ecb,ResourceVersion:4062760,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 04bc6de5-d645-4c27-afcc-3078a6a77909 0x4001befc70 0x4001befc71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4001befcf0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001befd10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-30 17:05:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.210: INFO: Pod "nginx-deployment-55fb7cb77f-bsch7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bsch7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-55fb7cb77f-bsch7,UID:69e07ab2-5b5b-4927-a1a4-8f997df57992,ResourceVersion:4062769,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 04bc6de5-d645-4c27-afcc-3078a6a77909 0x4001befde0 0x4001befde1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4001befe60} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001befe80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.211: INFO: Pod "nginx-deployment-55fb7cb77f-f826t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-f826t,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-55fb7cb77f-f826t,UID:7fa7703e-2659-4d42-b9b4-2cdd7d0575b5,ResourceVersion:4062842,Generation:0,CreationTimestamp:2020-08-30 17:05:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 04bc6de5-d645-4c27-afcc-3078a6a77909 0x4001beff50 0x4001beff51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4001beffe0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002290070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.122,StartTime:2020-08-30 17:05:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.212: INFO: Pod "nginx-deployment-55fb7cb77f-gchsr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gchsr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-55fb7cb77f-gchsr,UID:264722ef-4106-429f-8276-1617bece76be,ResourceVersion:4062791,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 04bc6de5-d645-4c27-afcc-3078a6a77909 0x40022902c0 0x40022902c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x40022903b0} {node.kubernetes.io/unreachable Exists  NoExecute 0x40022903d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.213: INFO: Pod "nginx-deployment-55fb7cb77f-h5p7f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-h5p7f,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-55fb7cb77f-h5p7f,UID:c28fbba0-c0d2-485e-adc0-92bb84172a89,ResourceVersion:4062835,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 04bc6de5-d645-4c27-afcc-3078a6a77909 0x4002290600 0x4002290601}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002290750} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002290770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.214: INFO: Pod "nginx-deployment-55fb7cb77f-jcg69" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jcg69,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-55fb7cb77f-jcg69,UID:15954e9b-c78a-430b-93ea-cecf1bc61ff8,ResourceVersion:4062690,Generation:0,CreationTimestamp:2020-08-30 17:05:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 04bc6de5-d645-4c27-afcc-3078a6a77909 0x40022909f0 0x40022909f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002290ac0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002290ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-30 17:05:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.216: INFO: Pod "nginx-deployment-55fb7cb77f-ktjq5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ktjq5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-55fb7cb77f-ktjq5,UID:dd370725-8c50-4f4a-a628-3393ea7487d3,ResourceVersion:4062839,Generation:0,CreationTimestamp:2020-08-30 17:05:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 04bc6de5-d645-4c27-afcc-3078a6a77909 0x4002290bb0 0x4002290bb1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002290c30} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002290d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.217: INFO: Pod "nginx-deployment-55fb7cb77f-w48lh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-w48lh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-55fb7cb77f-w48lh,UID:fa142bb1-fb29-4853-84c2-6874ab44f1ed,ResourceVersion:4062828,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 04bc6de5-d645-4c27-afcc-3078a6a77909 0x4002290e10 0x4002290e11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002290e90} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002290ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.218: INFO: Pod "nginx-deployment-55fb7cb77f-z4kbk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z4kbk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-55fb7cb77f-z4kbk,UID:3eb60d3a-7abe-44b4-a993-e37a3d378472,ResourceVersion:4062836,Generation:0,CreationTimestamp:2020-08-30 17:05:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 04bc6de5-d645-4c27-afcc-3078a6a77909 0x4002290f90 0x4002290f91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002291010} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002291030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:24 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.121,StartTime:2020-08-30 17:05:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.220: INFO: Pod "nginx-deployment-7b8c6f4498-298z8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-298z8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-298z8,UID:f75db5ee-8294-4644-80be-e4a955661c59,ResourceVersion:4062592,Generation:0,CreationTimestamp:2020-08-30 17:05:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x4002291120 0x4002291121}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002291190} {node.kubernetes.io/unreachable Exists  NoExecute 0x40022911b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:11 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.183,StartTime:2020-08-30 17:05:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-30 17:05:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9c44dc47beb11cf3b770e5172c3a6b5e87da9786c46a99c2ea0177a5a95e2083}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.221: INFO: Pod "nginx-deployment-7b8c6f4498-4qbk2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4qbk2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-4qbk2,UID:453cc7ed-c149-429c-8e09-36f38f92b370,ResourceVersion:4062754,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x4002291287 0x4002291288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002291300} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002291320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-30 17:05:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
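Note: pods created at 17:05:27-28 are classified "not available" even though their image tag (nginx:1.14-alpine) is valid: their containers are still in ContainerCreating, so the Ready condition is False. With a minReadySeconds of 0 (assumed here), availability reduces to the PodReady condition; a minimal sketch of that check follows. The helper name isAvailable is hypothetical, and the real e2e helper additionally compares the condition's LastTransitionTime against minReadySeconds.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isAvailable approximates the log's "is available" classification:
// with minReadySeconds == 0, a pod is available as soon as its Ready
// condition is True.
func isAvailable(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Toy check against an empty pod: no Ready condition, so false.
	fmt.Println(isAvailable(&corev1.Pod{}))
}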
Aug 30 17:05:31.222: INFO: Pod "nginx-deployment-7b8c6f4498-6nq75" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6nq75,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-6nq75,UID:7b60a862-bc31-452a-9e3b-f0188d68ffa3,ResourceVersion:4062618,Generation:0,CreationTimestamp:2020-08-30 17:05:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x40022913e7 0x40022913e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002291460} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002291480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:11 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.119,StartTime:2020-08-30 17:05:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-30 17:05:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1a16f9f639f25d490b3ea25b8ae24e245dcaa04a515d8b62868fd4639c70a624}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.223: INFO: Pod "nginx-deployment-7b8c6f4498-7wdqk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7wdqk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-7wdqk,UID:0fcf8ca2-3755-4c13-8615-29ae8d46492c,ResourceVersion:4062570,Generation:0,CreationTimestamp:2020-08-30 17:05:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x4002291557 0x4002291558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x40022915d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x40022915f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:11 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.115,StartTime:2020-08-30 17:05:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-30 17:05:15 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://204b0c9bfb41e9de7d0ccce1ac7cdaf64c9d7b8d292961b90ee9ce24a1d2df74}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.224: INFO: Pod "nginx-deployment-7b8c6f4498-9q5wp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9q5wp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-9q5wp,UID:94e82d5c-6b94-42bc-a3f8-377008f9492a,ResourceVersion:4062808,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x40022916c7 0x40022916c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002291740} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002291760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.225: INFO: Pod "nginx-deployment-7b8c6f4498-bvfxh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bvfxh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-bvfxh,UID:bd103443-1b76-408f-aecb-a1917a04547b,ResourceVersion:4062779,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x4002291827 0x4002291828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x40022918a0} {node.kubernetes.io/unreachable Exists  NoExecute 0x40022918c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.226: INFO: Pod "nginx-deployment-7b8c6f4498-cvw2n" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cvw2n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-cvw2n,UID:c134975d-fd4b-4bfe-8a12-667e7734213f,ResourceVersion:4062622,Generation:0,CreationTimestamp:2020-08-30 17:05:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x4002291987 0x4002291988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002291a00} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002291a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:11 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.116,StartTime:2020-08-30 17:05:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-30 17:05:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5d5136d610f0943dc0064a2d83a44439826d2e63f7542a9036d97563fdbfbf03}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.227: INFO: Pod "nginx-deployment-7b8c6f4498-dnc5s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dnc5s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-dnc5s,UID:8d6b8b72-4cda-4e97-83b9-4b96f828ce9c,ResourceVersion:4062821,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x4002291af7 0x4002291af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002291b70} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002291b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.228: INFO: Pod "nginx-deployment-7b8c6f4498-g7cr8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g7cr8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-g7cr8,UID:ffe3c37a-3e5d-459d-8e3f-c55fe4d51404,ResourceVersion:4062798,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x4002291c57 0x4002291c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002291cd0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002291cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.229: INFO: Pod "nginx-deployment-7b8c6f4498-gvb89" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gvb89,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-gvb89,UID:a9105f5a-e855-4961-aeba-ff82df579e0f,ResourceVersion:4062776,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x4002291db7 0x4002291db8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002291e30} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002291e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.230: INFO: Pod "nginx-deployment-7b8c6f4498-gzq8f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gzq8f,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-gzq8f,UID:afc168b8-2d67-40f2-a4aa-bdd9a7cc2e23,ResourceVersion:4062788,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x4002291f17 0x4002291f18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002291f90} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002291fb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.231: INFO: Pod "nginx-deployment-7b8c6f4498-j7tzc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j7tzc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-j7tzc,UID:b632c0ce-70d9-4d4d-947d-7e5dd3349477,ResourceVersion:4062611,Generation:0,CreationTimestamp:2020-08-30 17:05:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x4001676077 0x4001676078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x40016760f0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001676110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:11 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.186,StartTime:2020-08-30 17:05:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-30 17:05:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://984f10e47b8ebeaa843f06b2cc10b8ea56c7b27ae59fa2912139db260ccdb43a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.232: INFO: Pod "nginx-deployment-7b8c6f4498-m8rp7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-m8rp7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-m8rp7,UID:3171d480-dcb8-4896-9372-021be3a97953,ResourceVersion:4062785,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x40016761e7 0x40016761e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4001676260} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001676280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.233: INFO: Pod "nginx-deployment-7b8c6f4498-mqblq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mqblq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-mqblq,UID:fbe130e5-09c1-4246-9108-7c5a2a39a9f2,ResourceVersion:4062616,Generation:0,CreationTimestamp:2020-08-30 17:05:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x4001676347 0x4001676348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x40016763c0} {node.kubernetes.io/unreachable Exists  NoExecute 0x40016763e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:11 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.120,StartTime:2020-08-30 17:05:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-30 17:05:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://af1175b07e08c01ccb3a35a09484f0d27ec64c4c726b1db446ef6be5d555545a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.234: INFO: Pod "nginx-deployment-7b8c6f4498-rb68k" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rb68k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-rb68k,UID:539a3573-46a9-4e91-9c4a-bad97d4d21dd,ResourceVersion:4062595,Generation:0,CreationTimestamp:2020-08-30 17:05:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x40016764b7 0x40016764b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4001676530} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001676550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:11 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.182,StartTime:2020-08-30 17:05:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-30 17:05:19 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://510f21057765e2c516890bf00b2f674d183d66ce4f51c8671ecec264529eb895}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.235: INFO: Pod "nginx-deployment-7b8c6f4498-sc2h9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sc2h9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-sc2h9,UID:fe7c7d77-a362-47b5-a7c1-342b774c04cc,ResourceVersion:4062824,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x4001676627 0x4001676628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4001676750} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001676770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.236: INFO: Pod "nginx-deployment-7b8c6f4498-xhmzg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xhmzg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-xhmzg,UID:eb1aaefa-90b5-4d35-bf26-56997e424733,ResourceVersion:4062767,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x40016768c7 0x40016768c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x40016769d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x40016769f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-30 17:05:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.237: INFO: Pod "nginx-deployment-7b8c6f4498-xshc2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xshc2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-xshc2,UID:cfd5f3d7-e3a5-420e-a629-4edbc34feef6,ResourceVersion:4062805,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x4001676af7 0x4001676af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4001676ba0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001676c30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.237: INFO: Pod "nginx-deployment-7b8c6f4498-xwnkg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xwnkg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-xwnkg,UID:9271fa56-018c-4147-8ad8-2d8ea59901e0,ResourceVersion:4062796,Generation:0,CreationTimestamp:2020-08-30 17:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x4001676dc7 0x4001676dc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4001676e70} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001676e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:27 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-30 17:05:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 30 17:05:31.238: INFO: Pod "nginx-deployment-7b8c6f4498-z4gms" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z4gms,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4600,SelfLink:/api/v1/namespaces/deployment-4600/pods/nginx-deployment-7b8c6f4498-z4gms,UID:1208f871-83d1-48de-8789-80d194aaba34,ResourceVersion:4062625,Generation:0,CreationTimestamp:2020-08-30 17:05:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8ab50557-42ba-4b9f-af2d-a340461dd935 0x4001676fa7 0x4001676fa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zwftx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zwftx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zwftx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4001677070} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001677090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:05:11 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.118,StartTime:2020-08-30 17:05:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-30 17:05:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a85f070490ef5342f3dad8bb5478bed65e3771f4b91547c90e591d8a3c1764ce}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:05:31.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4600" for this suite.
Aug 30 17:06:07.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:06:07.510: INFO: namespace deployment-4600 deletion completed in 36.205646564s

• [SLOW TEST:56.569 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
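The proportional-scaling check above resizes a Deployment mid-rollout and verifies that replicas land on the old and new ReplicaSets in proportion to their current sizes. A minimal, self-contained Go sketch of that arithmetic (an illustration of the idea, not the deployment controller's actual code; all names are invented):

    package main

    import "fmt"

    // scaleProportionally sketches the idea: when a Deployment is resized
    // mid-rollout, each ReplicaSet gets a share of the new total proportional
    // to its current size, and the rounding remainder is handed out until the
    // totals match. The real controller (pkg/controller/deployment) also
    // accounts for maxSurge and annotations; this is illustration only.
    func scaleProportionally(sizes []int, newTotal int) []int {
        oldTotal := 0
        for _, s := range sizes {
            oldTotal += s
        }
        out := make([]int, len(sizes))
        if oldTotal == 0 {
            return out
        }
        assigned := 0
        for i, s := range sizes {
            out[i] = s * newTotal / oldTotal // floor of the proportional share
            assigned += out[i]
        }
        for i := 0; assigned < newTotal; i = (i + 1) % len(out) {
            out[i]++ // hand out the rounding remainder one replica at a time
            assigned++
        }
        return out
    }

    func main() {
        // e.g. ReplicaSets at 8 and 5 replicas, Deployment scaled to 30 total:
        fmt.Println(scaleProportionally([]int{8, 5}, 30)) // [19 11], preserving ~8:5
    }
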
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:06:07.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 30 17:06:07.621: INFO: Waiting up to 5m0s for pod "pod-6540e599-8219-40fc-87a8-fd260dfd5f4a" in namespace "emptydir-8681" to be "success or failure"
Aug 30 17:06:07.638: INFO: Pod "pod-6540e599-8219-40fc-87a8-fd260dfd5f4a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.183081ms
Aug 30 17:06:09.984: INFO: Pod "pod-6540e599-8219-40fc-87a8-fd260dfd5f4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.362634344s
Aug 30 17:06:11.991: INFO: Pod "pod-6540e599-8219-40fc-87a8-fd260dfd5f4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.370134037s
STEP: Saw pod success
Aug 30 17:06:11.992: INFO: Pod "pod-6540e599-8219-40fc-87a8-fd260dfd5f4a" satisfied condition "success or failure"
Aug 30 17:06:12.111: INFO: Trying to get logs from node iruya-worker pod pod-6540e599-8219-40fc-87a8-fd260dfd5f4a container test-container: 
STEP: delete the pod
Aug 30 17:06:12.248: INFO: Waiting for pod pod-6540e599-8219-40fc-87a8-fd260dfd5f4a to disappear
Aug 30 17:06:12.252: INFO: Pod pod-6540e599-8219-40fc-87a8-fd260dfd5f4a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:06:12.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8681" for this suite.
Aug 30 17:06:18.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:06:18.402: INFO: namespace emptydir-8681 deletion completed in 6.143002711s

• [SLOW TEST:10.892 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
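What this emptyDir case drives, roughly: a pod with a disk-backed emptyDir volume and a test container that writes a file and reports its mode, which the framework then scrapes from the container logs. A hedged client-go sketch of such a pod (image, names, and command are illustrative, not the suite's exact spec):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // emptyDirPod sketches the kind of pod the (root,0666,default) case
    // creates: an emptyDir volume on the default disk-backed medium, mounted
    // into a test container that writes a file and prints its mode.
    func emptyDirPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        // Empty Medium means the node's disk; a tmpfs variant
                        // appears in a later spec below.
                        EmptyDir: &corev1.EmptyDirVolumeSource{},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "test-container",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "touch /ed/f && chmod 0666 /ed/f && stat -c %a /ed/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/ed"}},
                }},
            },
        }
    }

    func main() { fmt.Println(emptyDirPod().Name) }
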
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:06:18.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-0b0ec096-853c-4c33-8808-c89578efad42 in namespace container-probe-7801
Aug 30 17:06:24.717: INFO: Started pod liveness-0b0ec096-853c-4c33-8808-c89578efad42 in namespace container-probe-7801
STEP: checking the pod's current state and verifying that restartCount is present
Aug 30 17:06:24.722: INFO: Initial restart count of pod liveness-0b0ec096-853c-4c33-8808-c89578efad42 is 0
Aug 30 17:06:42.788: INFO: Restart count of pod container-probe-7801/liveness-0b0ec096-853c-4c33-8808-c89578efad42 is now 1 (18.06627547s elapsed)
Aug 30 17:07:02.873: INFO: Restart count of pod container-probe-7801/liveness-0b0ec096-853c-4c33-8808-c89578efad42 is now 2 (38.150534556s elapsed)
Aug 30 17:07:23.022: INFO: Restart count of pod container-probe-7801/liveness-0b0ec096-853c-4c33-8808-c89578efad42 is now 3 (58.299605976s elapsed)
Aug 30 17:07:43.094: INFO: Restart count of pod container-probe-7801/liveness-0b0ec096-853c-4c33-8808-c89578efad42 is now 4 (1m18.372037694s elapsed)
Aug 30 17:08:45.380: INFO: Restart count of pod container-probe-7801/liveness-0b0ec096-853c-4c33-8808-c89578efad42 is now 5 (2m20.658382474s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:08:45.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7801" for this suite.
Aug 30 17:08:51.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:08:51.586: INFO: namespace container-probe-7801 deletion completed in 6.158648006s

• [SLOW TEST:153.181 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
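The restart counts above come from an exec liveness probe that begins failing partway through the container's life, so the kubelet kills and restarts the container and status.restartCount climbs; those are the values the poll lines read. A sketch of the mechanism, with illustrative timings and command (corev1.Handler is the v1.15-era field name; newer client-go renames it ProbeHandler):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // livenessPod: the probe succeeds while /tmp/health exists, then fails
    // once the container removes it, triggering repeated restarts.
    func livenessPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-sketch"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "liveness",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{
                            Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                        },
                        InitialDelaySeconds: 5,
                        PeriodSeconds:       5,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
    }

    func main() { fmt.Println(livenessPod().Name) }
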
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:08:51.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 30 17:08:56.279: INFO: Successfully updated pod "labelsupdate15ed7669-6e53-435c-a001-3b29f579ea07"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:09:00.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5720" for this suite.
Aug 30 17:09:22.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:09:22.474: INFO: namespace projected-5720 deletion completed in 22.160328841s

• [SLOW TEST:30.885 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
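Behind "update labels on modification": metadata.labels is projected into a file through a downward-API volume, and the kubelet rewrites the file after the pod's labels are patched, which the container observes. A sketch of the volume (volume and path names are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // labelsVolume sketches the projected downward-API source this test
    // mounts: the pod's labels appear as a file that tracks label updates.
    func labelsVolume() corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "labels",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                            }},
                        },
                    }},
                },
            },
        }
    }

    func main() { fmt.Println(labelsVolume().Name) }
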
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:09:22.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 30 17:09:22.570: INFO: Waiting up to 5m0s for pod "pod-f60b68ab-742b-48a4-9d66-fa69972f3df7" in namespace "emptydir-535" to be "success or failure"
Aug 30 17:09:22.590: INFO: Pod "pod-f60b68ab-742b-48a4-9d66-fa69972f3df7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.73471ms
Aug 30 17:09:24.807: INFO: Pod "pod-f60b68ab-742b-48a4-9d66-fa69972f3df7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236180274s
Aug 30 17:09:26.814: INFO: Pod "pod-f60b68ab-742b-48a4-9d66-fa69972f3df7": Phase="Running", Reason="", readiness=true. Elapsed: 4.243721751s
Aug 30 17:09:28.821: INFO: Pod "pod-f60b68ab-742b-48a4-9d66-fa69972f3df7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.250641392s
STEP: Saw pod success
Aug 30 17:09:28.821: INFO: Pod "pod-f60b68ab-742b-48a4-9d66-fa69972f3df7" satisfied condition "success or failure"
Aug 30 17:09:28.840: INFO: Trying to get logs from node iruya-worker2 pod pod-f60b68ab-742b-48a4-9d66-fa69972f3df7 container test-container: 
STEP: delete the pod
Aug 30 17:09:28.858: INFO: Waiting for pod pod-f60b68ab-742b-48a4-9d66-fa69972f3df7 to disappear
Aug 30 17:09:28.862: INFO: Pod pod-f60b68ab-742b-48a4-9d66-fa69972f3df7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:09:28.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-535" for this suite.
Aug 30 17:09:34.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:09:35.072: INFO: namespace emptydir-535 deletion completed in 6.201152149s

• [SLOW TEST:12.597 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
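This case differs from the earlier (root,0666,default) sketch in only two respects, shown below: a memory-backed (tmpfs) medium and a non-root UID (the UID value is illustrative). Note that a memory-medium emptyDir is tmpfs, and what is written to it counts against the container's memory accounting:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // tmpfsDeltas returns the two pieces that change relative to the earlier
    // emptyDir pod sketch for the (non-root,0644,tmpfs) case.
    func tmpfsDeltas() (corev1.Volume, *corev1.PodSecurityContext) {
        uid := int64(1000) // illustrative non-root UID
        vol := corev1.Volume{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
            },
        }
        return vol, &corev1.PodSecurityContext{RunAsUser: &uid}
    }

    func main() { v, _ := tmpfsDeltas(); fmt.Println(v.Name) }
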
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:09:35.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-1384
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1384 to expose endpoints map[]
Aug 30 17:09:35.249: INFO: successfully validated that service endpoint-test2 in namespace services-1384 exposes endpoints map[] (5.089366ms elapsed)
STEP: Creating pod pod1 in namespace services-1384
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1384 to expose endpoints map[pod1:[80]]
Aug 30 17:09:39.331: INFO: successfully validated that service endpoint-test2 in namespace services-1384 exposes endpoints map[pod1:[80]] (4.072058857s elapsed)
STEP: Creating pod pod2 in namespace services-1384
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1384 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 30 17:09:43.529: INFO: successfully validated that service endpoint-test2 in namespace services-1384 exposes endpoints map[pod1:[80] pod2:[80]] (4.191395633s elapsed)
STEP: Deleting pod pod1 in namespace services-1384
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1384 to expose endpoints map[pod2:[80]]
Aug 30 17:09:44.569: INFO: successfully validated that service endpoint-test2 in namespace services-1384 exposes endpoints map[pod2:[80]] (1.033959762s elapsed)
STEP: Deleting pod pod2 in namespace services-1384
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1384 to expose endpoints map[]
Aug 30 17:09:44.634: INFO: successfully validated that service endpoint-test2 in namespace services-1384 exposes endpoints map[] (59.540812ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:09:44.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1384" for this suite.
Aug 30 17:10:06.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:10:06.852: INFO: namespace services-1384 deletion completed in 22.167943477s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:31.775 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
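The endpoint checks above poll the service's Endpoints object until its subsets match the Ready pods behind the selector. A sketch of the service plus one such read, assuming a v1.15-era clientset (methods without a context argument); names are illustrative and error handling is elided:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
    )

    // waitLikeTheTest shows the shape of the check: create a selector-based
    // service, then read its Endpoints object; subsets stay empty until
    // matching pods become Ready.
    func waitLikeTheTest(cs kubernetes.Interface, ns string) error {
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"name": "endpoint-test2"},
                Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
            },
        }
        if _, err := cs.CoreV1().Services(ns).Create(svc); err != nil {
            return err
        }
        ep, err := cs.CoreV1().Endpoints(ns).Get("endpoint-test2", metav1.GetOptions{})
        if err != nil {
            return err
        }
        fmt.Println("subsets:", len(ep.Subsets))
        return nil
    }

    func main() { fmt.Println("sketch only; needs a live clientset") }
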
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:10:06.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 30 17:10:11.541: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4380 pod-service-account-1f8e42f2-6cd0-4307-a6c2-f02e3c205db1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 30 17:10:16.078: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4380 pod-service-account-1f8e42f2-6cd0-4307-a6c2-f02e3c205db1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 30 17:10:17.537: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4380 pod-service-account-1f8e42f2-6cd0-4307-a6c2-f02e3c205db1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:10:19.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4380" for this suite.
Aug 30 17:10:25.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:10:25.379: INFO: namespace svcaccounts-4380 deletion completed in 6.353272352s

• [SLOW TEST:18.523 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
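The three kubectl exec calls above read the files the kubelet projects into every pod that automounts its service-account credentials. From inside a pod, the same check is just:

    package main

    import (
        "fmt"
        "io/ioutil"
    )

    // The kubelet mounts the service-account token, the cluster CA, and the
    // pod's namespace at this fixed path; this only works when run in-pod.
    func main() {
        base := "/var/run/secrets/kubernetes.io/serviceaccount/"
        for _, f := range []string{"token", "ca.crt", "namespace"} {
            b, err := ioutil.ReadFile(base + f)
            if err != nil {
                fmt.Println(f, "error:", err)
                continue
            }
            fmt.Printf("%s: %d bytes\n", f, len(b))
        }
    }
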
SSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:10:25.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4526/configmap-test-3a512f78-78b2-4d0b-a2c3-ee33d7b75626
STEP: Creating a pod to test consume configMaps
Aug 30 17:10:25.769: INFO: Waiting up to 5m0s for pod "pod-configmaps-cf66e741-d941-420b-b46c-d6a2f4791d21" in namespace "configmap-4526" to be "success or failure"
Aug 30 17:10:25.801: INFO: Pod "pod-configmaps-cf66e741-d941-420b-b46c-d6a2f4791d21": Phase="Pending", Reason="", readiness=false. Elapsed: 31.044403ms
Aug 30 17:10:27.951: INFO: Pod "pod-configmaps-cf66e741-d941-420b-b46c-d6a2f4791d21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181159133s
Aug 30 17:10:30.077: INFO: Pod "pod-configmaps-cf66e741-d941-420b-b46c-d6a2f4791d21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307689302s
Aug 30 17:10:32.090: INFO: Pod "pod-configmaps-cf66e741-d941-420b-b46c-d6a2f4791d21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.320362348s
STEP: Saw pod success
Aug 30 17:10:32.090: INFO: Pod "pod-configmaps-cf66e741-d941-420b-b46c-d6a2f4791d21" satisfied condition "success or failure"
Aug 30 17:10:32.096: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-cf66e741-d941-420b-b46c-d6a2f4791d21 container env-test: 
STEP: delete the pod
Aug 30 17:10:32.136: INFO: Waiting for pod pod-configmaps-cf66e741-d941-420b-b46c-d6a2f4791d21 to disappear
Aug 30 17:10:32.148: INFO: Pod pod-configmaps-cf66e741-d941-420b-b46c-d6a2f4791d21 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:10:32.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4526" for this suite.
Aug 30 17:10:38.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:10:38.328: INFO: namespace configmap-4526 deletion completed in 6.168510151s

• [SLOW TEST:12.949 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
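The consumption side of this ConfigMap test, sketched: a container env var sourced from one key of the ConfigMap created above, and a command that prints the environment for the framework to scrape. Key and names are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // configMapEnvPod wires one ConfigMap key into the container environment
    // via valueFrom/configMapKeyRef.
    func configMapEnvPod(cmName string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "env-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{{
                        Name: "CONFIG_DATA_1",
                        ValueFrom: &corev1.EnvVarSource{
                            ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                                LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                                Key:                  "data-1",
                            },
                        },
                    }},
                }},
            },
        }
    }

    func main() { fmt.Println(configMapEnvPod("configmap-test").Name) }
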
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:10:38.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 30 17:10:38.545: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:38.563: INFO: Number of nodes with available pods: 0
Aug 30 17:10:38.563: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:39.573: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:39.579: INFO: Number of nodes with available pods: 0
Aug 30 17:10:39.579: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:40.625: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:40.631: INFO: Number of nodes with available pods: 0
Aug 30 17:10:40.631: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:41.666: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:41.671: INFO: Number of nodes with available pods: 0
Aug 30 17:10:41.671: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:42.857: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:42.933: INFO: Number of nodes with available pods: 0
Aug 30 17:10:42.933: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:43.576: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:43.598: INFO: Number of nodes with available pods: 2
Aug 30 17:10:43.598: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 30 17:10:43.627: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:43.632: INFO: Number of nodes with available pods: 1
Aug 30 17:10:43.632: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:44.645: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:44.651: INFO: Number of nodes with available pods: 1
Aug 30 17:10:44.651: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:45.643: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:45.650: INFO: Number of nodes with available pods: 1
Aug 30 17:10:45.651: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:46.643: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:46.648: INFO: Number of nodes with available pods: 1
Aug 30 17:10:46.648: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:47.645: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:47.651: INFO: Number of nodes with available pods: 1
Aug 30 17:10:47.651: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:48.644: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:48.651: INFO: Number of nodes with available pods: 1
Aug 30 17:10:48.651: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:49.644: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:49.650: INFO: Number of nodes with available pods: 1
Aug 30 17:10:49.650: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:50.644: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:50.650: INFO: Number of nodes with available pods: 1
Aug 30 17:10:50.650: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:51.641: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:51.647: INFO: Number of nodes with available pods: 1
Aug 30 17:10:51.647: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:52.643: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:52.649: INFO: Number of nodes with available pods: 1
Aug 30 17:10:52.649: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:53.643: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:53.649: INFO: Number of nodes with available pods: 1
Aug 30 17:10:53.649: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:54.644: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:54.651: INFO: Number of nodes with available pods: 1
Aug 30 17:10:54.651: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:55.644: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:55.649: INFO: Number of nodes with available pods: 1
Aug 30 17:10:55.649: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:10:56.642: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:10:56.647: INFO: Number of nodes with available pods: 2
Aug 30 17:10:56.647: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4935, will wait for the garbage collector to delete the pods
Aug 30 17:10:56.718: INFO: Deleting DaemonSet.extensions daemon-set took: 8.415282ms
Aug 30 17:10:57.019: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.841098ms
Aug 30 17:11:03.725: INFO: Number of nodes with available pods: 0
Aug 30 17:11:03.725: INFO: Number of running nodes: 0, number of available pods: 0
Aug 30 17:11:03.729: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4935/daemonsets","resourceVersion":"4064077"},"items":null}

Aug 30 17:11:03.732: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4935/pods","resourceVersion":"4064077"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:11:03.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4935" for this suite.
Aug 30 17:11:09.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:11:09.954: INFO: namespace daemonsets-4935 deletion completed in 6.192719784s

• [SLOW TEST:31.624 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
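A sketch of the DaemonSet object this test creates. The pod template carries no toleration for the control plane's node-role.kubernetes.io/master:NoSchedule taint, which is why the poll lines above skip iruya-control-plane and expect pods only on the two workers. Names are illustrative:

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // simpleDaemonSet: one pod per schedulable node; deleting a pod causes
    // the controller to revive it, which is the second half of the test.
    func simpleDaemonSet() *appsv1.DaemonSet {
        labels := map[string]string{"daemonset-name": "daemon-set"}
        return &appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{Name: "app", Image: "nginx:1.14-alpine"}},
                    },
                },
            },
        }
    }

    func main() { fmt.Println(simpleDaemonSet().Name) }
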
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:11:09.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 30 17:11:10.031: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8d5d08e5-155c-4f23-869b-8df925eb7e27" in namespace "projected-9617" to be "success or failure"
Aug 30 17:11:10.090: INFO: Pod "downwardapi-volume-8d5d08e5-155c-4f23-869b-8df925eb7e27": Phase="Pending", Reason="", readiness=false. Elapsed: 58.352687ms
Aug 30 17:11:12.098: INFO: Pod "downwardapi-volume-8d5d08e5-155c-4f23-869b-8df925eb7e27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065881968s
Aug 30 17:11:14.104: INFO: Pod "downwardapi-volume-8d5d08e5-155c-4f23-869b-8df925eb7e27": Phase="Running", Reason="", readiness=true. Elapsed: 4.072021068s
Aug 30 17:11:16.111: INFO: Pod "downwardapi-volume-8d5d08e5-155c-4f23-869b-8df925eb7e27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079850183s
STEP: Saw pod success
Aug 30 17:11:16.112: INFO: Pod "downwardapi-volume-8d5d08e5-155c-4f23-869b-8df925eb7e27" satisfied condition "success or failure"
Aug 30 17:11:16.123: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8d5d08e5-155c-4f23-869b-8df925eb7e27 container client-container: 
STEP: delete the pod
Aug 30 17:11:16.158: INFO: Waiting for pod downwardapi-volume-8d5d08e5-155c-4f23-869b-8df925eb7e27 to disappear
Aug 30 17:11:16.196: INFO: Pod downwardapi-volume-8d5d08e5-155c-4f23-869b-8df925eb7e27 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:11:16.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9617" for this suite.
Aug 30 17:11:22.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:11:22.355: INFO: namespace projected-9617 deletion completed in 6.150472232s

• [SLOW TEST:12.400 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
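"Memory request" is exposed through a downward-API volume item with a resourceFieldRef: the named container's requests.memory is written to a file, scaled by the divisor (with 1Mi the file reads in MiB). The container must actually declare the request. A sketch, with names that are assumptions:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // memoryRequestFile sketches the volume item behind this test; the file
    // lands under the downward-API volume's mount path.
    func memoryRequestFile() corev1.DownwardAPIVolumeFile {
        return corev1.DownwardAPIVolumeFile{
            Path: "memory_request",
            ResourceFieldRef: &corev1.ResourceFieldSelector{
                ContainerName: "client-container",
                Resource:      "requests.memory",
                Divisor:       resource.MustParse("1Mi"),
            },
        }
    }

    func main() { fmt.Println(memoryRequestFile().Path) }
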
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:11:22.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 30 17:11:22.513: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Aug 30 17:11:25.712: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 30 17:11:28.747: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734404285, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734404285, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734404285, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734404285, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 30 17:11:30.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734404285, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734404285, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734404285, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734404285, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 30 17:11:33.422: INFO: Waited 637.45261ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:11:33.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2142" for this suite.
Aug 30 17:11:40.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:11:40.374: INFO: namespace aggregator-2142 deletion completed in 6.521047239s

• [SLOW TEST:18.017 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
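After the sample-apiserver Deployment above becomes available, the test registers it with the kube-aggregator via an APIService object so that requests for its group/version are proxied to the backing Service. A hedged sketch of such an object: the wardle.k8s.io coordinates follow the upstream sample-apiserver convention, while the service name, priorities, and CABundle wiring here are assumptions, not the suite's exact values:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
    )

    // sampleAPIService tells the aggregator to route wardle.k8s.io/v1alpha1
    // to a Service in the test namespace. CABundle must hold the CA that
    // signed the extension server's serving cert (wiring elided here).
    func sampleAPIService(ns string, caBundle []byte) *apiregv1.APIService {
        return &apiregv1.APIService{
            ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.k8s.io"},
            Spec: apiregv1.APIServiceSpec{
                Group:                "wardle.k8s.io",
                Version:              "v1alpha1",
                Service:              &apiregv1.ServiceReference{Namespace: ns, Name: "sample-api"},
                CABundle:             caBundle,
                GroupPriorityMinimum: 2000,
                VersionPriority:      200,
            },
        }
    }

    func main() { fmt.Println(sampleAPIService("aggregator-2142", nil).Name) }
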
S
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:11:40.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:11:40.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3362" for this suite.
Aug 30 17:12:02.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:12:02.776: INFO: namespace pods-3362 deletion completed in 22.238920847s

• [SLOW TEST:22.401 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
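The QOS class this test verifies on pod status is derived from the containers' requests and limits. A simplified restatement of the rule (the authoritative version lives in the kubelet's qos package and also handles init containers and per-resource corner cases; this sketch only captures the three-way split):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // qosClass: no requests or limits anywhere gives BestEffort; requests
    // equal to limits for every declared resource gives Guaranteed;
    // everything else is Burstable.
    func qosClass(pod *corev1.Pod) corev1.PodQOSClass {
        anyRequests, anyLimits, guaranteed := false, false, true
        for _, c := range pod.Spec.Containers {
            if len(c.Resources.Requests) > 0 {
                anyRequests = true
            }
            if len(c.Resources.Limits) == 0 {
                guaranteed = false
            } else {
                anyLimits = true
            }
            for name, req := range c.Resources.Requests {
                if lim, ok := c.Resources.Limits[name]; !ok || req.Cmp(lim) != 0 {
                    guaranteed = false // a request without a matching limit
                }
            }
        }
        switch {
        case !anyRequests && !anyLimits:
            return corev1.PodQOSBestEffort
        case guaranteed:
            return corev1.PodQOSGuaranteed
        default:
            return corev1.PodQOSBurstable
        }
    }

    func main() {
        fmt.Println(qosClass(&corev1.Pod{})) // BestEffort: nothing declared
    }
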
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:12:02.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:12:09.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1127" for this suite.
Aug 30 17:12:32.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:12:32.130: INFO: namespace replication-controller-1127 deletion completed in 22.147983785s

• [SLOW TEST:29.351 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:12:32.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0830 17:13:13.034128       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 30 17:13:13.034: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:13:13.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9591" for this suite.
Aug 30 17:13:21.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:13:21.196: INFO: namespace gc-9591 deletion completed in 8.155135729s

• [SLOW TEST:49.065 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
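The orphaning behaviour under test is selected at delete time: with an Orphan propagation policy the garbage collector strips ownerReferences from the RC's pods rather than deleting them, which is why the suite then waits 30 seconds to confirm the pods survive. The call, sketched against a v1.15-era clientset (no context argument); names are illustrative:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteRCOrphaning deletes the ReplicationController but leaves its
    // pods behind as orphans for another controller to adopt later.
    func deleteRCOrphaning(cs kubernetes.Interface, ns, name string) error {
        orphan := metav1.DeletePropagationOrphan
        return cs.CoreV1().ReplicationControllers(ns).Delete(name, &metav1.DeleteOptions{
            PropagationPolicy: &orphan,
        })
    }

    func main() { fmt.Println("sketch only; needs a live clientset") }
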
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:13:21.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 30 17:13:21.458: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58742675-0cce-4d71-ac62-ef011ea45374" in namespace "projected-8725" to be "success or failure"
Aug 30 17:13:21.525: INFO: Pod "downwardapi-volume-58742675-0cce-4d71-ac62-ef011ea45374": Phase="Pending", Reason="", readiness=false. Elapsed: 66.912174ms
Aug 30 17:13:23.533: INFO: Pod "downwardapi-volume-58742675-0cce-4d71-ac62-ef011ea45374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074786891s
Aug 30 17:13:25.543: INFO: Pod "downwardapi-volume-58742675-0cce-4d71-ac62-ef011ea45374": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084146035s
Aug 30 17:13:27.549: INFO: Pod "downwardapi-volume-58742675-0cce-4d71-ac62-ef011ea45374": Phase="Running", Reason="", readiness=true. Elapsed: 6.090606048s
Aug 30 17:13:29.555: INFO: Pod "downwardapi-volume-58742675-0cce-4d71-ac62-ef011ea45374": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.09680449s
STEP: Saw pod success
Aug 30 17:13:29.555: INFO: Pod "downwardapi-volume-58742675-0cce-4d71-ac62-ef011ea45374" satisfied condition "success or failure"
Aug 30 17:13:29.560: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-58742675-0cce-4d71-ac62-ef011ea45374 container client-container: 
STEP: delete the pod
Aug 30 17:13:29.681: INFO: Waiting for pod downwardapi-volume-58742675-0cce-4d71-ac62-ef011ea45374 to disappear
Aug 30 17:13:29.694: INFO: Pod downwardapi-volume-58742675-0cce-4d71-ac62-ef011ea45374 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:13:29.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8725" for this suite.
Aug 30 17:13:35.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:13:35.859: INFO: namespace projected-8725 deletion completed in 6.15646656s

• [SLOW TEST:14.661 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:13:35.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 30 17:13:36.019: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:13:36.040: INFO: Number of nodes with available pods: 0
Aug 30 17:13:36.040: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:13:37.071: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:13:37.079: INFO: Number of nodes with available pods: 0
Aug 30 17:13:37.079: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:13:38.177: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:13:38.183: INFO: Number of nodes with available pods: 0
Aug 30 17:13:38.183: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:13:39.061: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:13:39.092: INFO: Number of nodes with available pods: 0
Aug 30 17:13:39.092: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:13:40.139: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:13:40.145: INFO: Number of nodes with available pods: 1
Aug 30 17:13:40.145: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 30 17:13:41.049: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:13:41.055: INFO: Number of nodes with available pods: 2
Aug 30 17:13:41.055: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 30 17:13:41.084: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:13:41.118: INFO: Number of nodes with available pods: 1
Aug 30 17:13:41.118: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:13:42.299: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:13:42.388: INFO: Number of nodes with available pods: 1
Aug 30 17:13:42.388: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:13:43.130: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:13:43.136: INFO: Number of nodes with available pods: 1
Aug 30 17:13:43.136: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:13:44.147: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:13:44.153: INFO: Number of nodes with available pods: 1
Aug 30 17:13:44.153: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:13:45.130: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:13:45.136: INFO: Number of nodes with available pods: 2
Aug 30 17:13:45.136: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6754, will wait for the garbage collector to delete the pods
Aug 30 17:13:45.206: INFO: Deleting DaemonSet.extensions daemon-set took: 9.385763ms
Aug 30 17:13:45.507: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.999651ms
Aug 30 17:13:53.714: INFO: Number of nodes with available pods: 0
Aug 30 17:13:53.714: INFO: Number of running nodes: 0, number of available pods: 0
Aug 30 17:13:53.719: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6754/daemonsets","resourceVersion":"4064835"},"items":null}

Aug 30 17:13:53.723: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6754/pods","resourceVersion":"4064835"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:13:53.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6754" for this suite.
Aug 30 17:13:59.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:13:59.966: INFO: namespace daemonsets-6754 deletion completed in 6.212430147s

• [SLOW TEST:24.106 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
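The DaemonSet this test creates is of roughly the following shape, sketched with the same v1.15-era Go types; the label key and image are assumptions (the image is borrowed from the rollback test later in this log). Note there is no toleration for node-role.kubernetes.io/master, which is why every poll above skips the tainted iruya-control-plane node.

// Minimal sketch of a "simple DaemonSet", not the suite's fixture.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label key
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// No master toleration: control-plane nodes are skipped.
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}

The "retry" part of the test then forces one daemon pod's phase to Failed and waits for the controller to replace it, which is the second polling loop in the log above.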
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:13:59.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-3a044282-bf9f-490d-b74a-6d51a47badeb
STEP: Creating a pod to test consume secrets
Aug 30 17:14:00.071: INFO: Waiting up to 5m0s for pod "pod-secrets-df53d54a-4ab9-4373-b9a0-259b4045a438" in namespace "secrets-2768" to be "success or failure"
Aug 30 17:14:00.096: INFO: Pod "pod-secrets-df53d54a-4ab9-4373-b9a0-259b4045a438": Phase="Pending", Reason="", readiness=false. Elapsed: 24.987622ms
Aug 30 17:14:02.103: INFO: Pod "pod-secrets-df53d54a-4ab9-4373-b9a0-259b4045a438": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03234068s
Aug 30 17:14:04.108: INFO: Pod "pod-secrets-df53d54a-4ab9-4373-b9a0-259b4045a438": Phase="Running", Reason="", readiness=true. Elapsed: 4.037759859s
Aug 30 17:14:06.116: INFO: Pod "pod-secrets-df53d54a-4ab9-4373-b9a0-259b4045a438": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04495721s
STEP: Saw pod success
Aug 30 17:14:06.116: INFO: Pod "pod-secrets-df53d54a-4ab9-4373-b9a0-259b4045a438" satisfied condition "success or failure"
Aug 30 17:14:06.120: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-df53d54a-4ab9-4373-b9a0-259b4045a438 container secret-volume-test: 
STEP: delete the pod
Aug 30 17:14:06.153: INFO: Waiting for pod pod-secrets-df53d54a-4ab9-4373-b9a0-259b4045a438 to disappear
Aug 30 17:14:06.229: INFO: Pod pod-secrets-df53d54a-4ab9-4373-b9a0-259b4045a438 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:14:06.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2768" for this suite.
Aug 30 17:14:12.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:14:12.436: INFO: namespace secrets-2768 deletion completed in 6.181771487s

• [SLOW TEST:12.469 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
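A sketch of the pod spec exercised here, under the same v1.15-era types: a plain secret volume with defaultMode set, consumed by a non-root user with an fsGroup so the kubelet chowns the mounted files to that group. The UID, GID, mode, and names are assumed values for illustration.

// Minimal sketch; all concrete values are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	var (
		uid  int64 = 1000 // non-root user (assumed)
		fsg  int64 = 1001 // fsGroup applied to volume files (assumed)
		mode int32 = 0440 // defaultMode under test (assumed)
	)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &fsg},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox", // assumed image
				Command:      []string{"sh", "-c", "ls -l /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-example", // assumed name
						DefaultMode: &mode,
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}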
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:14:12.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-6340ab18-8cbf-43e9-ae57-ea97615a3e51
STEP: Creating a pod to test consume secrets
Aug 30 17:14:12.567: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-be144073-2de6-47c5-85fe-8f86fc04c1c8" in namespace "projected-2421" to be "success or failure"
Aug 30 17:14:12.591: INFO: Pod "pod-projected-secrets-be144073-2de6-47c5-85fe-8f86fc04c1c8": Phase="Pending", Reason="", readiness=false. Elapsed: 23.56197ms
Aug 30 17:14:14.638: INFO: Pod "pod-projected-secrets-be144073-2de6-47c5-85fe-8f86fc04c1c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070481086s
Aug 30 17:14:16.644: INFO: Pod "pod-projected-secrets-be144073-2de6-47c5-85fe-8f86fc04c1c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076403884s
Aug 30 17:14:18.651: INFO: Pod "pod-projected-secrets-be144073-2de6-47c5-85fe-8f86fc04c1c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.084085146s
STEP: Saw pod success
Aug 30 17:14:18.652: INFO: Pod "pod-projected-secrets-be144073-2de6-47c5-85fe-8f86fc04c1c8" satisfied condition "success or failure"
Aug 30 17:14:18.677: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-be144073-2de6-47c5-85fe-8f86fc04c1c8 container projected-secret-volume-test: 
STEP: delete the pod
Aug 30 17:14:18.703: INFO: Waiting for pod pod-projected-secrets-be144073-2de6-47c5-85fe-8f86fc04c1c8 to disappear
Aug 30 17:14:18.709: INFO: Pod pod-projected-secrets-be144073-2de6-47c5-85fe-8f86fc04c1c8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:14:18.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2421" for this suite.
Aug 30 17:14:24.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:14:24.868: INFO: namespace projected-2421 deletion completed in 6.15075644s

• [SLOW TEST:12.431 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
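The projected-secret variant differs from the plain secret volume above only in how the volume source is declared: the secret is one projection source among potentially many in a single mount. A sketch of just that volume, with an assumed secret name:

// Minimal sketch of a projected secret volume source.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-secret-test-example", // assumed name
						},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}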
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:14:24.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Aug 30 17:14:25.008: INFO: Waiting up to 5m0s for pod "var-expansion-18924198-2da3-46c9-bad2-225dd7492200" in namespace "var-expansion-91" to be "success or failure"
Aug 30 17:14:25.021: INFO: Pod "var-expansion-18924198-2da3-46c9-bad2-225dd7492200": Phase="Pending", Reason="", readiness=false. Elapsed: 12.870928ms
Aug 30 17:14:27.183: INFO: Pod "var-expansion-18924198-2da3-46c9-bad2-225dd7492200": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175347256s
Aug 30 17:14:29.191: INFO: Pod "var-expansion-18924198-2da3-46c9-bad2-225dd7492200": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.183069129s
STEP: Saw pod success
Aug 30 17:14:29.191: INFO: Pod "var-expansion-18924198-2da3-46c9-bad2-225dd7492200" satisfied condition "success or failure"
Aug 30 17:14:29.207: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-18924198-2da3-46c9-bad2-225dd7492200 container dapi-container: 
STEP: delete the pod
Aug 30 17:14:29.232: INFO: Waiting for pod var-expansion-18924198-2da3-46c9-bad2-225dd7492200 to disappear
Aug 30 17:14:29.236: INFO: Pod var-expansion-18924198-2da3-46c9-bad2-225dd7492200 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:14:29.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-91" for this suite.
Aug 30 17:14:35.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:14:35.409: INFO: namespace var-expansion-91 deletion completed in 6.166569612s

• [SLOW TEST:10.539 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
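The substitution being tested is the kubelet's $(VAR) expansion in a container's args, which happens before the container starts. A sketch of the relevant container fields, with an assumed variable name and value:

// Minimal sketch of arg expansion; TEST_VAR is an assumed name.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox", // assumed image
		Command: []string{"sh", "-c"},
		// $(TEST_VAR) is replaced by the kubelet with the env value below,
		// so the container echoes "test-value".
		Args: []string{"echo $(TEST_VAR)"},
		Env:  []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}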
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:14:35.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0830 17:15:05.614354       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 30 17:15:05.614: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:15:05.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6180" for this suite.
Aug 30 17:15:11.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:15:11.776: INFO: namespace gc-6180 deletion completed in 6.153643915s

• [SLOW TEST:36.367 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
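The orphaning behavior above comes from deleting the Deployment with PropagationPolicy set to Orphan, which strips owner references instead of cascading the delete; the 30-second wait then confirms the garbage collector leaves the ReplicaSet alone. A sketch using the client-go release contemporary with this suite (v1.15, where Delete takes a name and *metav1.DeleteOptions; newer releases take a context and a value); the deployment name is assumed.

// Minimal sketch of an orphaning delete, v1.15-era client-go.
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	policy := metav1.DeletePropagationOrphan
	// Orphan: the Deployment goes away, its ReplicaSet stays behind.
	if err := client.AppsV1().Deployments("gc-6180").Delete(
		"example-deployment", // assumed name
		&metav1.DeleteOptions{PropagationPolicy: &policy},
	); err != nil {
		panic(err)
	}
}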
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:15:11.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 30 17:15:12.527: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4400,SelfLink:/api/v1/namespaces/watch-4400/configmaps/e2e-watch-test-configmap-a,UID:d4a85f6e-971d-4e85-ad7d-e0b299ee131c,ResourceVersion:4065145,Generation:0,CreationTimestamp:2020-08-30 17:15:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 30 17:15:12.527: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4400,SelfLink:/api/v1/namespaces/watch-4400/configmaps/e2e-watch-test-configmap-a,UID:d4a85f6e-971d-4e85-ad7d-e0b299ee131c,ResourceVersion:4065145,Generation:0,CreationTimestamp:2020-08-30 17:15:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 30 17:15:22.539: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4400,SelfLink:/api/v1/namespaces/watch-4400/configmaps/e2e-watch-test-configmap-a,UID:d4a85f6e-971d-4e85-ad7d-e0b299ee131c,ResourceVersion:4065165,Generation:0,CreationTimestamp:2020-08-30 17:15:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 30 17:15:22.540: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4400,SelfLink:/api/v1/namespaces/watch-4400/configmaps/e2e-watch-test-configmap-a,UID:d4a85f6e-971d-4e85-ad7d-e0b299ee131c,ResourceVersion:4065165,Generation:0,CreationTimestamp:2020-08-30 17:15:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 30 17:15:32.553: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4400,SelfLink:/api/v1/namespaces/watch-4400/configmaps/e2e-watch-test-configmap-a,UID:d4a85f6e-971d-4e85-ad7d-e0b299ee131c,ResourceVersion:4065187,Generation:0,CreationTimestamp:2020-08-30 17:15:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 30 17:15:32.554: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4400,SelfLink:/api/v1/namespaces/watch-4400/configmaps/e2e-watch-test-configmap-a,UID:d4a85f6e-971d-4e85-ad7d-e0b299ee131c,ResourceVersion:4065187,Generation:0,CreationTimestamp:2020-08-30 17:15:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 30 17:15:42.564: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4400,SelfLink:/api/v1/namespaces/watch-4400/configmaps/e2e-watch-test-configmap-a,UID:d4a85f6e-971d-4e85-ad7d-e0b299ee131c,ResourceVersion:4065207,Generation:0,CreationTimestamp:2020-08-30 17:15:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 30 17:15:42.565: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4400,SelfLink:/api/v1/namespaces/watch-4400/configmaps/e2e-watch-test-configmap-a,UID:d4a85f6e-971d-4e85-ad7d-e0b299ee131c,ResourceVersion:4065207,Generation:0,CreationTimestamp:2020-08-30 17:15:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 30 17:15:52.576: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4400,SelfLink:/api/v1/namespaces/watch-4400/configmaps/e2e-watch-test-configmap-b,UID:b1256f13-284a-46b4-9635-cb01a3cfa5e2,ResourceVersion:4065226,Generation:0,CreationTimestamp:2020-08-30 17:15:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 30 17:15:52.577: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4400,SelfLink:/api/v1/namespaces/watch-4400/configmaps/e2e-watch-test-configmap-b,UID:b1256f13-284a-46b4-9635-cb01a3cfa5e2,ResourceVersion:4065226,Generation:0,CreationTimestamp:2020-08-30 17:15:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 30 17:16:02.588: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4400,SelfLink:/api/v1/namespaces/watch-4400/configmaps/e2e-watch-test-configmap-b,UID:b1256f13-284a-46b4-9635-cb01a3cfa5e2,ResourceVersion:4065247,Generation:0,CreationTimestamp:2020-08-30 17:15:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 30 17:16:02.589: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4400,SelfLink:/api/v1/namespaces/watch-4400/configmaps/e2e-watch-test-configmap-b,UID:b1256f13-284a-46b4-9635-cb01a3cfa5e2,ResourceVersion:4065247,Generation:0,CreationTimestamp:2020-08-30 17:15:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:16:12.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4400" for this suite.
Aug 30 17:16:18.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:16:18.760: INFO: namespace watch-4400 deletion completed in 6.156200504s

• [SLOW TEST:66.981 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
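Each watcher in this test is a label-selected watch on configmaps; the duplicated "Got : ADDED/MODIFIED/DELETED" lines above are the same event arriving at the label-A watcher and the A-or-B watcher. A sketch of one such watcher with v1.15-era client-go (where Watch takes only metav1.ListOptions; newer releases also take a context), reusing the namespace and label from the log:

// Minimal sketch of a label-selected configmap watch.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	w, err := client.CoreV1().ConfigMaps("watch-4400").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for event := range w.ResultChan() {
		// event.Type is ADDED, MODIFIED, or DELETED, as seen in the log.
		fmt.Println("Got :", event.Type)
	}
}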
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:16:18.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 30 17:16:19.141: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6ab741f-4c6c-4490-adb1-0cc44f3a7749" in namespace "downward-api-743" to be "success or failure"
Aug 30 17:16:19.161: INFO: Pod "downwardapi-volume-c6ab741f-4c6c-4490-adb1-0cc44f3a7749": Phase="Pending", Reason="", readiness=false. Elapsed: 19.596569ms
Aug 30 17:16:21.169: INFO: Pod "downwardapi-volume-c6ab741f-4c6c-4490-adb1-0cc44f3a7749": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027203607s
Aug 30 17:16:23.197: INFO: Pod "downwardapi-volume-c6ab741f-4c6c-4490-adb1-0cc44f3a7749": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054861846s
STEP: Saw pod success
Aug 30 17:16:23.197: INFO: Pod "downwardapi-volume-c6ab741f-4c6c-4490-adb1-0cc44f3a7749" satisfied condition "success or failure"
Aug 30 17:16:23.202: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c6ab741f-4c6c-4490-adb1-0cc44f3a7749 container client-container: 
STEP: delete the pod
Aug 30 17:16:23.243: INFO: Waiting for pod downwardapi-volume-c6ab741f-4c6c-4490-adb1-0cc44f3a7749 to disappear
Aug 30 17:16:23.560: INFO: Pod downwardapi-volume-c6ab741f-4c6c-4490-adb1-0cc44f3a7749 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:16:23.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-743" for this suite.
Aug 30 17:16:29.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:16:29.901: INFO: namespace downward-api-743 deletion completed in 6.275937419s

• [SLOW TEST:11.139 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
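The behavior verified here: when a downward API volume file references limits.memory but the container declares no memory limit, the file reports the node's allocatable memory instead. A sketch of just that volume item, using assumed names:

// Minimal sketch of the default-memory-limit volume item.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// With no memory limit on "client-container", this file surfaces
	// node allocatable memory, which is what the test checks.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit", // assumed file name
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}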
S
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:16:29.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 30 17:16:29.985: INFO: Waiting up to 5m0s for pod "downwardapi-volume-052be567-a4b7-4542-a1c9-a90569cb9f48" in namespace "downward-api-6822" to be "success or failure"
Aug 30 17:16:30.003: INFO: Pod "downwardapi-volume-052be567-a4b7-4542-a1c9-a90569cb9f48": Phase="Pending", Reason="", readiness=false. Elapsed: 18.086399ms
Aug 30 17:16:32.348: INFO: Pod "downwardapi-volume-052be567-a4b7-4542-a1c9-a90569cb9f48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.363145183s
Aug 30 17:16:34.413: INFO: Pod "downwardapi-volume-052be567-a4b7-4542-a1c9-a90569cb9f48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.427726966s
STEP: Saw pod success
Aug 30 17:16:34.413: INFO: Pod "downwardapi-volume-052be567-a4b7-4542-a1c9-a90569cb9f48" satisfied condition "success or failure"
Aug 30 17:16:34.417: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-052be567-a4b7-4542-a1c9-a90569cb9f48 container client-container: 
STEP: delete the pod
Aug 30 17:16:34.569: INFO: Waiting for pod downwardapi-volume-052be567-a4b7-4542-a1c9-a90569cb9f48 to disappear
Aug 30 17:16:34.634: INFO: Pod downwardapi-volume-052be567-a4b7-4542-a1c9-a90569cb9f48 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:16:34.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6822" for this suite.
Aug 30 17:16:40.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:16:40.877: INFO: namespace downward-api-6822 deletion completed in 6.233860098s

• [SLOW TEST:10.976 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
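"Mode on item file" is the per-file permission override: a downward API volume item can carry its own Mode that takes precedence over the volume's defaultMode. A sketch of one such item; the 0400 value and field path are assumptions.

// Minimal sketch of a per-item file mode.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // assumed mode under test
	item := corev1.DownwardAPIVolumeFile{
		Path:     "podname",
		FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
		Mode:     &mode, // overrides the volume's defaultMode for this file only
	}
	out, _ := json.MarshalIndent(item, "", "  ")
	fmt.Println(string(out))
}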
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:16:40.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 30 17:16:40.994: INFO: Waiting up to 5m0s for pod "pod-c174825e-9d91-4301-bddc-d1de3d0cb53d" in namespace "emptydir-5710" to be "success or failure"
Aug 30 17:16:41.024: INFO: Pod "pod-c174825e-9d91-4301-bddc-d1de3d0cb53d": Phase="Pending", Reason="", readiness=false. Elapsed: 29.423328ms
Aug 30 17:16:43.160: INFO: Pod "pod-c174825e-9d91-4301-bddc-d1de3d0cb53d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165557237s
Aug 30 17:16:45.167: INFO: Pod "pod-c174825e-9d91-4301-bddc-d1de3d0cb53d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.172214551s
STEP: Saw pod success
Aug 30 17:16:45.167: INFO: Pod "pod-c174825e-9d91-4301-bddc-d1de3d0cb53d" satisfied condition "success or failure"
Aug 30 17:16:45.173: INFO: Trying to get logs from node iruya-worker pod pod-c174825e-9d91-4301-bddc-d1de3d0cb53d container test-container: 
STEP: delete the pod
Aug 30 17:16:45.274: INFO: Waiting for pod pod-c174825e-9d91-4301-bddc-d1de3d0cb53d to disappear
Aug 30 17:16:45.305: INFO: Pod pod-c174825e-9d91-4301-bddc-d1de3d0cb53d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:16:45.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5710" for this suite.
Aug 30 17:16:51.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:16:51.473: INFO: namespace emptydir-5710 deletion completed in 6.157699257s

• [SLOW TEST:10.593 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
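"(non-root,0666,tmpfs)" decodes to: run as a non-root user, expect 0666 file permissions, and back the emptyDir with memory (tmpfs). A sketch of such a pod; the UID, image, and shell commands are assumptions standing in for the suite's mounttest container.

// Minimal sketch of a tmpfs-backed emptyDir exercised as non-root.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // non-root user (assumed)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // assumed image
				// Create a file with 0666 and print its mode back.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}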
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:16:51.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-2a136e2b-4318-4223-a6ed-3ee1904a1f42
STEP: Creating a pod to test consume configMaps
Aug 30 17:16:51.613: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dc74bf71-4e4f-46fe-8b87-02a5db1a62f2" in namespace "projected-7211" to be "success or failure"
Aug 30 17:16:51.642: INFO: Pod "pod-projected-configmaps-dc74bf71-4e4f-46fe-8b87-02a5db1a62f2": Phase="Pending", Reason="", readiness=false. Elapsed: 28.224877ms
Aug 30 17:16:53.649: INFO: Pod "pod-projected-configmaps-dc74bf71-4e4f-46fe-8b87-02a5db1a62f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035400035s
Aug 30 17:16:55.654: INFO: Pod "pod-projected-configmaps-dc74bf71-4e4f-46fe-8b87-02a5db1a62f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041056642s
Aug 30 17:16:57.660: INFO: Pod "pod-projected-configmaps-dc74bf71-4e4f-46fe-8b87-02a5db1a62f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046133833s
STEP: Saw pod success
Aug 30 17:16:57.660: INFO: Pod "pod-projected-configmaps-dc74bf71-4e4f-46fe-8b87-02a5db1a62f2" satisfied condition "success or failure"
Aug 30 17:16:57.663: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-dc74bf71-4e4f-46fe-8b87-02a5db1a62f2 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 30 17:16:57.773: INFO: Waiting for pod pod-projected-configmaps-dc74bf71-4e4f-46fe-8b87-02a5db1a62f2 to disappear
Aug 30 17:16:57.782: INFO: Pod pod-projected-configmaps-dc74bf71-4e4f-46fe-8b87-02a5db1a62f2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:16:57.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7211" for this suite.
Aug 30 17:17:03.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:17:04.004: INFO: namespace projected-7211 deletion completed in 6.2133112s

• [SLOW TEST:12.529 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:17:04.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-1e9c1b36-94b1-40c6-b9cd-787d7e4a5688 in namespace container-probe-8698
Aug 30 17:17:10.126: INFO: Started pod test-webserver-1e9c1b36-94b1-40c6-b9cd-787d7e4a5688 in namespace container-probe-8698
STEP: checking the pod's current state and verifying that restartCount is present
Aug 30 17:17:10.129: INFO: Initial restart count of pod test-webserver-1e9c1b36-94b1-40c6-b9cd-787d7e4a5688 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:21:11.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8698" for this suite.
Aug 30 17:21:17.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:21:17.662: INFO: namespace container-probe-8698 deletion completed in 6.188801472s

• [SLOW TEST:253.655 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
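The four-minute gap between "Initial restart count ... is 0" and teardown is the test watching that a healthy /healthz-style HTTP liveness probe never trips. A sketch of such a probe as declared on the test-webserver container; the path, port, and thresholds are assumed values, and note the embedded field is named Handler in these v1.15-era types (newer releases call it ProbeHandler).

// Minimal sketch of an HTTP liveness probe that should never fire.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := &corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
		},
		InitialDelaySeconds: 15, // assumed
		PeriodSeconds:       10, // assumed
		FailureThreshold:    3,  // assumed
	}
	out, _ := json.MarshalIndent(probe, "", "  ")
	fmt.Println(string(out))
}

Success here is the absence of events: restartCount stays 0 for the whole observation window, hence the 253-second runtime.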
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:21:17.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 30 17:21:17.796: INFO: Create a RollingUpdate DaemonSet
Aug 30 17:21:17.801: INFO: Check that daemon pods launch on every node of the cluster
Aug 30 17:21:17.832: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:21:17.842: INFO: Number of nodes with available pods: 0
Aug 30 17:21:17.842: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:21:18.850: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:21:18.855: INFO: Number of nodes with available pods: 0
Aug 30 17:21:18.855: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:21:20.231: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:21:20.351: INFO: Number of nodes with available pods: 0
Aug 30 17:21:20.352: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:21:20.934: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:21:20.940: INFO: Number of nodes with available pods: 0
Aug 30 17:21:20.940: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:21:21.855: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:21:21.862: INFO: Number of nodes with available pods: 0
Aug 30 17:21:21.862: INFO: Node iruya-worker is running more than one daemon pod
Aug 30 17:21:22.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:21:22.909: INFO: Number of nodes with available pods: 2
Aug 30 17:21:22.909: INFO: Number of running nodes: 2, number of available pods: 2
Aug 30 17:21:22.909: INFO: Update the DaemonSet to trigger a rollout
Aug 30 17:21:22.922: INFO: Updating DaemonSet daemon-set
Aug 30 17:21:27.969: INFO: Roll back the DaemonSet before rollout is complete
Aug 30 17:21:27.977: INFO: Updating DaemonSet daemon-set
Aug 30 17:21:27.977: INFO: Make sure DaemonSet rollback is complete
Aug 30 17:21:27.988: INFO: Wrong image for pod: daemon-set-cvb9l. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 30 17:21:27.988: INFO: Pod daemon-set-cvb9l is not available
Aug 30 17:21:28.033: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:21:29.040: INFO: Wrong image for pod: daemon-set-cvb9l. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 30 17:21:29.040: INFO: Pod daemon-set-cvb9l is not available
Aug 30 17:21:29.050: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:21:30.058: INFO: Wrong image for pod: daemon-set-cvb9l. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 30 17:21:30.058: INFO: Pod daemon-set-cvb9l is not available
Aug 30 17:21:30.222: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 30 17:21:31.041: INFO: Pod daemon-set-5j429 is not available
Aug 30 17:21:31.051: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3936, will wait for the garbage collector to delete the pods
Aug 30 17:21:31.173: INFO: Deleting DaemonSet.extensions daemon-set took: 8.180661ms
Aug 30 17:21:31.574: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.755546ms
Aug 30 17:21:37.279: INFO: Number of nodes with available pods: 0
Aug 30 17:21:37.279: INFO: Number of running nodes: 0, number of available pods: 0
Aug 30 17:21:37.282: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3936/daemonsets","resourceVersion":"4066098"},"items":null}

Aug 30 17:21:37.286: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3936/pods","resourceVersion":"4066098"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:21:37.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3936" for this suite.
Aug 30 17:21:43.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:21:43.482: INFO: namespace daemonsets-3936 deletion completed in 6.174968513s

• [SLOW TEST:25.819 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
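The rollback sequence in the log (bad image foo:non-existent pushed, then reverted to docker.io/library/nginx:1.14-alpine before the rollout finishes) amounts to two DaemonSet updates; pods that never received the bad image must not restart. A sketch with v1.15-era client-go signatures (Get takes a name and metav1.GetOptions, Update takes the object; newer releases also take a context), reusing the names and images from the log:

// Minimal sketch of the update-then-rollback flow.
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	dsClient := client.AppsV1().DaemonSets("daemonsets-3936")

	ds, err := dsClient.Get("daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	good := ds.Spec.Template.Spec.Containers[0].Image // nginx:1.14-alpine

	// Trigger a rollout with a bad image, as the test does...
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if ds, err = dsClient.Update(ds); err != nil {
		panic(err)
	}

	// ...then roll back before it completes; pods still on the good
	// image keep running, so no unnecessary restarts occur.
	ds.Spec.Template.Spec.Containers[0].Image = good
	if _, err = dsClient.Update(ds); err != nil {
		panic(err)
	}
}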
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:21:43.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0830 17:21:53.608644       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 30 17:21:53.609: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:21:53.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1159" for this suite.
Aug 30 17:21:59.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:21:59.773: INFO: namespace gc-1159 deletion completed in 6.154820174s

• [SLOW TEST:16.290 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
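[editor's note] The behaviour verified above (dependents deleted rather than orphaned) is a plain cascading delete; a sketch with an illustrative RC name "my-rc":
# Default cascading delete: the garbage collector removes the RC's pods.
kubectl delete rc my-rc
# The same delete against the raw API with an explicit propagation policy:
kubectl proxy --port=8001 &
curl -X DELETE -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
  http://localhost:8001/api/v1/namespaces/default/replicationcontrollers/my-rc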
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:21:59.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 30 17:22:05.419: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:22:05.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1554" for this suite.
Aug 30 17:22:11.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:22:11.762: INFO: namespace container-runtime-1554 deletion completed in 6.170461951s

• [SLOW TEST:11.987 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
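[editor's note] A minimal pod matching what this test exercises, with illustrative names; the container writes "OK" to the termination-log file, which the kubelet then surfaces in pod status:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    # On failure with an empty file, the tail of the container log is used instead.
    terminationMessagePolicy: FallbackToLogsOnError
EOF
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'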
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:22:11.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:23:11.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3412" for this suite.
Aug 30 17:23:33.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:23:34.088: INFO: namespace container-probe-3412 deletion completed in 22.198222324s

• [SLOW TEST:82.322 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
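[editor's note] The pod under test carries a readiness probe that always fails; a sketch with illustrative names. Unlike a failing liveness probe, this never restarts the container — it only keeps the pod out of Ready:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails -> pod stays NotReady
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod readiness-never-demo   # READY stays 0/1, RESTARTS stays 0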
SSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:23:34.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:24:00.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1646" for this suite.
Aug 30 17:24:06.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:24:06.508: INFO: namespace namespaces-1646 deletion completed in 6.158380841s
STEP: Destroying namespace "nsdeletetest-7462" for this suite.
Aug 30 17:24:06.511: INFO: Namespace nsdeletetest-7462 was already deleted
STEP: Destroying namespace "nsdeletetest-6725" for this suite.
Aug 30 17:24:12.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:24:12.682: INFO: namespace nsdeletetest-6725 deletion completed in 6.169930945s

• [SLOW TEST:38.593 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
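[editor's note] The lifecycle exercised above, sketched with illustrative names:
kubectl create namespace nsdelete-demo
kubectl -n nsdelete-demo run pause --image=k8s.gcr.io/pause:3.1 --restart=Never
# Deletion tears down every object in the namespace before the namespace itself goes away.
kubectl delete namespace nsdelete-demo
kubectl create namespace nsdelete-demo
kubectl -n nsdelete-demo get pods   # "No resources found": a recreated namespace starts empty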
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:24:12.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Aug 30 17:24:12.759: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix970599569/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:24:13.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8551" for this suite.
Aug 30 17:24:19.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:24:19.997: INFO: namespace kubectl-8551 deletion completed in 6.16250175s

• [SLOW TEST:7.310 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
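[editor's note] The proxy started above listens on a Unix socket instead of a TCP port; a curl built with Unix-socket support (7.40+) can query it directly. The socket path is illustrative:
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
# The test retrieves this /api/ document, e.g. {"kind":"APIVersions",...}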
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:24:19.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-1526b238-f109-4b1d-a5b8-2005657d6313
STEP: Creating secret with name s-test-opt-upd-6c3a8eec-9f12-4c5e-a4f0-fab3e4cf7eee
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-1526b238-f109-4b1d-a5b8-2005657d6313
STEP: Updating secret s-test-opt-upd-6c3a8eec-9f12-4c5e-a4f0-fab3e4cf7eee
STEP: Creating secret with name s-test-opt-create-e59cc4b2-e706-4e95-8798-c1a135b4677f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:25:34.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-510" for this suite.
Aug 30 17:25:58.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:25:58.767: INFO: namespace projected-510 deletion completed in 24.164108355s

• [SLOW TEST:98.769 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
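[editor's note] What "optional" buys in a projected volume, sketched with illustrative names: the pod starts even while the secret is absent, and the kubelet projects it — and later updates — once it exists:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    projected:
      sources:
      - secret:
          name: maybe-present
          optional: true   # a missing secret is not a mount error
EOF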
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:25:58.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6992.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6992.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6992.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6992.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 30 17:26:07.034: INFO: DNS probes using dns-test-488fdd60-53a6-46e8-9737-3b215dae06fc succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6992.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6992.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6992.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6992.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 30 17:26:15.676: INFO: File wheezy_udp@dns-test-service-3.dns-6992.svc.cluster.local from pod  dns-6992/dns-test-1d9709be-9fdd-49fa-8fb1-cd24a899ff68 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 30 17:26:15.680: INFO: File jessie_udp@dns-test-service-3.dns-6992.svc.cluster.local from pod  dns-6992/dns-test-1d9709be-9fdd-49fa-8fb1-cd24a899ff68 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 30 17:26:15.680: INFO: Lookups using dns-6992/dns-test-1d9709be-9fdd-49fa-8fb1-cd24a899ff68 failed for: [wheezy_udp@dns-test-service-3.dns-6992.svc.cluster.local jessie_udp@dns-test-service-3.dns-6992.svc.cluster.local]

Aug 30 17:26:20.688: INFO: File wheezy_udp@dns-test-service-3.dns-6992.svc.cluster.local from pod  dns-6992/dns-test-1d9709be-9fdd-49fa-8fb1-cd24a899ff68 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 30 17:26:20.694: INFO: File jessie_udp@dns-test-service-3.dns-6992.svc.cluster.local from pod  dns-6992/dns-test-1d9709be-9fdd-49fa-8fb1-cd24a899ff68 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 30 17:26:20.694: INFO: Lookups using dns-6992/dns-test-1d9709be-9fdd-49fa-8fb1-cd24a899ff68 failed for: [wheezy_udp@dns-test-service-3.dns-6992.svc.cluster.local jessie_udp@dns-test-service-3.dns-6992.svc.cluster.local]

Aug 30 17:26:25.687: INFO: File wheezy_udp@dns-test-service-3.dns-6992.svc.cluster.local from pod  dns-6992/dns-test-1d9709be-9fdd-49fa-8fb1-cd24a899ff68 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 30 17:26:25.691: INFO: File jessie_udp@dns-test-service-3.dns-6992.svc.cluster.local from pod  dns-6992/dns-test-1d9709be-9fdd-49fa-8fb1-cd24a899ff68 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 30 17:26:25.691: INFO: Lookups using dns-6992/dns-test-1d9709be-9fdd-49fa-8fb1-cd24a899ff68 failed for: [wheezy_udp@dns-test-service-3.dns-6992.svc.cluster.local jessie_udp@dns-test-service-3.dns-6992.svc.cluster.local]

Aug 30 17:26:30.687: INFO: File wheezy_udp@dns-test-service-3.dns-6992.svc.cluster.local from pod  dns-6992/dns-test-1d9709be-9fdd-49fa-8fb1-cd24a899ff68 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 30 17:26:30.691: INFO: File jessie_udp@dns-test-service-3.dns-6992.svc.cluster.local from pod  dns-6992/dns-test-1d9709be-9fdd-49fa-8fb1-cd24a899ff68 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 30 17:26:30.691: INFO: Lookups using dns-6992/dns-test-1d9709be-9fdd-49fa-8fb1-cd24a899ff68 failed for: [wheezy_udp@dns-test-service-3.dns-6992.svc.cluster.local jessie_udp@dns-test-service-3.dns-6992.svc.cluster.local]

Aug 30 17:26:35.687: INFO: File wheezy_udp@dns-test-service-3.dns-6992.svc.cluster.local from pod  dns-6992/dns-test-1d9709be-9fdd-49fa-8fb1-cd24a899ff68 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 30 17:26:35.691: INFO: Lookups using dns-6992/dns-test-1d9709be-9fdd-49fa-8fb1-cd24a899ff68 failed for: [wheezy_udp@dns-test-service-3.dns-6992.svc.cluster.local]

Aug 30 17:26:40.693: INFO: DNS probes using dns-test-1d9709be-9fdd-49fa-8fb1-cd24a899ff68 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6992.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6992.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6992.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6992.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 30 17:26:49.986: INFO: DNS probes using dns-test-aec84173-ca14-4925-85fc-ff632ea5312b succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:26:50.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6992" for this suite.
Aug 30 17:26:56.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:26:56.270: INFO: namespace dns-6992 deletion completed in 6.157926211s

• [SLOW TEST:57.501 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
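[editor's note] The service shape exercised above, with illustrative names. The repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" probes are expected: cached CNAME answers persist until the cluster DNS re-resolves after the externalName change:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: extname-demo
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# Repoint the service; in-cluster lookups converge once DNS TTLs expire.
kubectl patch service extname-demo -p '{"spec":{"externalName":"bar.example.com"}}'
# From a pod with dig available:
#   dig +short extname-demo.default.svc.cluster.local CNAME   -> bar.example.com.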
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:26:56.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Aug 30 17:26:56.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5882'
Aug 30 17:27:01.156: INFO: stderr: ""
Aug 30 17:27:01.157: INFO: stdout: "pod/pause created\n"
Aug 30 17:27:01.157: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 30 17:27:01.158: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5882" to be "running and ready"
Aug 30 17:27:01.167: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.78735ms
Aug 30 17:27:03.173: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015459957s
Aug 30 17:27:05.179: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.021165025s
Aug 30 17:27:05.179: INFO: Pod "pause" satisfied condition "running and ready"
Aug 30 17:27:05.180: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 30 17:27:05.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5882'
Aug 30 17:27:06.499: INFO: stderr: ""
Aug 30 17:27:06.499: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 30 17:27:06.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5882'
Aug 30 17:27:07.820: INFO: stderr: ""
Aug 30 17:27:07.820: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          6s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 30 17:27:07.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5882'
Aug 30 17:27:09.082: INFO: stderr: ""
Aug 30 17:27:09.083: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 30 17:27:09.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5882'
Aug 30 17:27:10.354: INFO: stderr: ""
Aug 30 17:27:10.354: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Aug 30 17:27:10.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5882'
Aug 30 17:27:11.614: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 30 17:27:11.614: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 30 17:27:11.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5882'
Aug 30 17:27:12.902: INFO: stderr: "No resources found.\n"
Aug 30 17:27:12.902: INFO: stdout: ""
Aug 30 17:27:12.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5882 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 30 17:27:14.182: INFO: stderr: ""
Aug 30 17:27:14.183: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:27:14.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5882" for this suite.
Aug 30 17:27:20.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:27:20.356: INFO: namespace kubectl-5882 deletion completed in 6.165006427s

• [SLOW TEST:24.081 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
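[editor's note] The label operations the test drives, runnable directly (pod name as in the run above):
kubectl label pod pause testing-label=testing-label-value   # add
kubectl get pod pause -L testing-label                      # show the label as a column
kubectl label pod pause testing-label-                      # trailing '-' removes it
kubectl label pod pause testing-label=other --overwrite     # required to change an existing value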
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:27:20.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 30 17:27:20.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7186'
Aug 30 17:27:21.824: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 30 17:27:21.824: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Aug 30 17:27:21.845: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-wq2cf]
Aug 30 17:27:21.845: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-wq2cf" in namespace "kubectl-7186" to be "running and ready"
Aug 30 17:27:21.860: INFO: Pod "e2e-test-nginx-rc-wq2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 15.521295ms
Aug 30 17:27:23.868: INFO: Pod "e2e-test-nginx-rc-wq2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022797168s
Aug 30 17:27:25.875: INFO: Pod "e2e-test-nginx-rc-wq2cf": Phase="Running", Reason="", readiness=true. Elapsed: 4.030166015s
Aug 30 17:27:25.875: INFO: Pod "e2e-test-nginx-rc-wq2cf" satisfied condition "running and ready"
Aug 30 17:27:25.875: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-wq2cf]
Aug 30 17:27:25.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-7186'
Aug 30 17:27:27.236: INFO: stderr: ""
Aug 30 17:27:27.236: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Aug 30 17:27:27.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7186'
Aug 30 17:27:28.640: INFO: stderr: ""
Aug 30 17:27:28.640: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:27:28.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7186" for this suite.
Aug 30 17:27:34.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:27:34.884: INFO: namespace kubectl-7186 deletion completed in 6.218663171s

• [SLOW TEST:14.526 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
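[editor's note] As the deprecation warning above says, --generator=run/v1 is on its way out; rough non-deprecated equivalents, with illustrative names:
# A bare pod:
kubectl run e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine --restart=Never
# Or a managed workload via create:
kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
# Fetching logs through the controller reference, as the test does (empty output is
# normal for an nginx that has served no requests yet):
kubectl logs deployment/e2e-test-nginx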
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:27:34.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 30 17:27:35.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8676'
Aug 30 17:27:36.717: INFO: stderr: ""
Aug 30 17:27:36.717: INFO: stdout: "replicationcontroller/redis-master created\n"
Aug 30 17:27:36.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8676'
Aug 30 17:27:39.229: INFO: stderr: ""
Aug 30 17:27:39.229: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 30 17:27:40.237: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 17:27:40.237: INFO: Found 0 / 1
Aug 30 17:27:41.235: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 17:27:41.236: INFO: Found 1 / 1
Aug 30 17:27:41.236: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 30 17:27:41.242: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 17:27:41.242: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 30 17:27:41.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-kcdt8 --namespace=kubectl-8676'
Aug 30 17:27:42.660: INFO: stderr: ""
Aug 30 17:27:42.660: INFO: stdout: "Name:           redis-master-kcdt8\nNamespace:      kubectl-8676\nPriority:       0\nNode:           iruya-worker/172.18.0.9\nStart Time:     Sun, 30 Aug 2020 17:27:36 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.244.1.170\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://277e1e4cd4e3c82fa3bf8f578116754a27bfb84c3535332fcc2886f737277079\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 30 Aug 2020 17:27:40 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sp2sk (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-sp2sk:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-sp2sk\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  6s    default-scheduler      Successfully assigned kubectl-8676/redis-master-kcdt8 to iruya-worker\n  Normal  Pulled     4s    kubelet, iruya-worker  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-worker  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-worker  Started container redis-master\n"
Aug 30 17:27:42.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-8676'
Aug 30 17:27:44.126: INFO: stderr: ""
Aug 30 17:27:44.126: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-8676\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: redis-master-kcdt8\n"
Aug 30 17:27:44.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-8676'
Aug 30 17:27:45.463: INFO: stderr: ""
Aug 30 17:27:45.463: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-8676\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.98.233.176\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.1.170:6379\nSession Affinity:  None\nEvents:            \n"
Aug 30 17:27:45.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Aug 30 17:27:46.888: INFO: stderr: ""
Aug 30 17:27:46.889: INFO: stdout: "Name:               iruya-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 15 Aug 2020 09:34:51 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Sun, 30 Aug 2020 17:27:27 +0000   Sat, 15 Aug 2020 09:34:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Sun, 30 Aug 2020 17:27:27 +0000   Sat, 15 Aug 2020 09:34:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Sun, 30 Aug 2020 17:27:27 +0000   Sat, 15 Aug 2020 09:34:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Sun, 30 Aug 2020 17:27:27 +0000   Sat, 15 Aug 2020 09:35:31 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.7\n  Hostname:    iruya-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nSystem Info:\n Machine ID:                 3ed9130db08840259d2231bd97220883\n System UUID:                e52cc602-b019-45cd-b06f-235cc5705532\n Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version:             4.15.0-109-generic\n OS Image:                   Ubuntu 20.04 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version:            v1.15.12\n Kube-Proxy Version:         v1.15.12\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-5d4dd4b4db-6krdd                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     15d\n  kube-system                coredns-5d4dd4b4db-htp88                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     15d\n  kube-system                etcd-iruya-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15d\n  kube-system                kindnet-gvnsh                                  100m (0%)     100m (0%)   50Mi 
(0%)        50Mi (0%)      15d\n  kube-system                kube-apiserver-iruya-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         15d\n  kube-system                kube-controller-manager-iruya-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         15d\n  kube-system                kube-proxy-ndl9h                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         15d\n  kube-system                kube-scheduler-iruya-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         15d\n  local-path-storage         local-path-provisioner-668779bd7-g227z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Aug 30 17:27:46.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8676'
Aug 30 17:27:48.226: INFO: stderr: ""
Aug 30 17:27:48.226: INFO: stdout: "Name:         kubectl-8676\nLabels:       e2e-framework=kubectl\n              e2e-run=8a135acd-3c95-4211-a475-8eba91622e1c\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:27:48.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8676" for this suite.
Aug 30 17:28:10.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:28:10.397: INFO: namespace kubectl-8676 deletion completed in 22.160222043s

• [SLOW TEST:35.512 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
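[editor's note] The same describe calls, in the forms most useful interactively (namespace and names as in the run above):
kubectl -n kubectl-8676 describe pods -l app=redis    # by selector instead of exact pod name
kubectl -n kubectl-8676 describe rc redis-master
kubectl -n kubectl-8676 describe service redis-master
kubectl describe node iruya-control-plane
kubectl describe namespace kubectl-8676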
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:28:10.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-76b84721-3a8c-477f-a7d3-8ae04fa6447a
STEP: Creating a pod to test consume configMaps
Aug 30 17:28:10.484: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bf325f36-3e9a-4c00-be72-117e216081a5" in namespace "projected-7151" to be "success or failure"
Aug 30 17:28:10.503: INFO: Pod "pod-projected-configmaps-bf325f36-3e9a-4c00-be72-117e216081a5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.214264ms
Aug 30 17:28:12.510: INFO: Pod "pod-projected-configmaps-bf325f36-3e9a-4c00-be72-117e216081a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025815923s
Aug 30 17:28:14.524: INFO: Pod "pod-projected-configmaps-bf325f36-3e9a-4c00-be72-117e216081a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040318119s
Aug 30 17:28:16.566: INFO: Pod "pod-projected-configmaps-bf325f36-3e9a-4c00-be72-117e216081a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.082534811s
STEP: Saw pod success
Aug 30 17:28:16.567: INFO: Pod "pod-projected-configmaps-bf325f36-3e9a-4c00-be72-117e216081a5" satisfied condition "success or failure"
Aug 30 17:28:16.573: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-bf325f36-3e9a-4c00-be72-117e216081a5 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 30 17:28:16.590: INFO: Waiting for pod pod-projected-configmaps-bf325f36-3e9a-4c00-be72-117e216081a5 to disappear
Aug 30 17:28:16.594: INFO: Pod pod-projected-configmaps-bf325f36-3e9a-4c00-be72-117e216081a5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:28:16.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7151" for this suite.
Aug 30 17:28:22.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:28:22.755: INFO: namespace projected-7151 deletion completed in 6.152955299s

• [SLOW TEST:12.356 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
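[editor's note] A sketch of the pod shape this test builds, with illustrative names: the items mapping renames the ConfigMap key inside the volume, and the pod runs as a non-root UID:
kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  securityContext:
    runAsUser: 1000          # non-root, as the [LinuxOnly] variant requires
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/config/renamed/key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config
          items:
          - key: data-1
            path: renamed/key   # the mapping under test
EOF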
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:28:22.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 30 17:28:22.908: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/: 
alternatives.log
containers/

(identical directory listing returned by the remaining proxy iterations; the tail of this test and the header of the following Pods test are truncated in this capture)
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 30 17:28:33.795: INFO: Successfully updated pod "pod-update-activedeadlineseconds-753cf8e9-d5a3-4ce5-bc98-5216623f4105"
Aug 30 17:28:33.796: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-753cf8e9-d5a3-4ce5-bc98-5216623f4105" in namespace "pods-6363" to be "terminated due to deadline exceeded"
Aug 30 17:28:33.812: INFO: Pod "pod-update-activedeadlineseconds-753cf8e9-d5a3-4ce5-bc98-5216623f4105": Phase="Running", Reason="", readiness=true. Elapsed: 16.050887ms
Aug 30 17:28:35.819: INFO: Pod "pod-update-activedeadlineseconds-753cf8e9-d5a3-4ce5-bc98-5216623f4105": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.02328499s
Aug 30 17:28:35.820: INFO: Pod "pod-update-activedeadlineseconds-753cf8e9-d5a3-4ce5-bc98-5216623f4105" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:28:35.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6363" for this suite.
Aug 30 17:28:41.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:28:42.019: INFO: namespace pods-6363 deletion completed in 6.190062139s

• [SLOW TEST:12.854 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
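[editor's note] The update performed above, as a one-liner with an illustrative pod name. Per API validation, activeDeadlineSeconds may be added or decreased on a live pod but never increased or cleared; once the deadline passes, the kubelet fails the pod with reason DeadlineExceeded, as seen in the log:
kubectl patch pod my-pod -p '{"spec":{"activeDeadlineSeconds":5}}'
kubectl get pod my-pod -o jsonpath='{.status.reason}'   # DeadlineExceeded after ~5s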
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:28:42.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 30 17:28:42.128: INFO: Waiting up to 5m0s for pod "downwardapi-volume-20930409-2b16-4844-9b35-00b23983bfbf" in namespace "projected-360" to be "success or failure"
Aug 30 17:28:42.160: INFO: Pod "downwardapi-volume-20930409-2b16-4844-9b35-00b23983bfbf": Phase="Pending", Reason="", readiness=false. Elapsed: 31.56413ms
Aug 30 17:28:44.421: INFO: Pod "downwardapi-volume-20930409-2b16-4844-9b35-00b23983bfbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292368219s
Aug 30 17:28:46.428: INFO: Pod "downwardapi-volume-20930409-2b16-4844-9b35-00b23983bfbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.299517754s
STEP: Saw pod success
Aug 30 17:28:46.428: INFO: Pod "downwardapi-volume-20930409-2b16-4844-9b35-00b23983bfbf" satisfied condition "success or failure"
Aug 30 17:28:46.433: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-20930409-2b16-4844-9b35-00b23983bfbf container client-container: 
STEP: delete the pod
Aug 30 17:28:46.493: INFO: Waiting for pod downwardapi-volume-20930409-2b16-4844-9b35-00b23983bfbf to disappear
Aug 30 17:28:46.551: INFO: Pod downwardapi-volume-20930409-2b16-4844-9b35-00b23983bfbf no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:28:46.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-360" for this suite.
Aug 30 17:28:52.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:28:52.775: INFO: namespace projected-360 deletion completed in 6.149777344s

• [SLOW TEST:10.754 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
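[editor's note] A sketch of the downward API volume the test mounts, with illustrative names; the divisor controls the units written into the projected file:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi   # file content is "64"
EOF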
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:28:52.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 30 17:28:52.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6490'
Aug 30 17:28:54.264: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 30 17:28:54.264: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Aug 30 17:28:54.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-6490'
Aug 30 17:28:55.633: INFO: stderr: ""
Aug 30 17:28:55.633: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:28:55.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6490" for this suite.
Aug 30 17:29:18.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:29:18.501: INFO: namespace kubectl-6490 deletion completed in 22.413362666s

• [SLOW TEST:25.724 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
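
The deprecation warning captured in the stderr above points at the replacement path itself. On kubectl versions that ship `kubectl create job` (added around 1.14, as far as I know), the non-deprecated equivalent of this test's command would be roughly:

kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
kubectl get job.batch/e2e-test-nginx-job
kubectl delete jobs e2e-test-nginx-job

Note that `kubectl create job` has no flag for the restart policy; the OnFailure behavior this test asks for would have to be set in a manifest instead.
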
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:29:18.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-30f20621-859f-4dcc-b845-bdf59d301501
STEP: Creating a pod to test consume configMaps
Aug 30 17:29:18.612: INFO: Waiting up to 5m0s for pod "pod-configmaps-44856745-b0bb-4af8-9820-4d9ca0b6945b" in namespace "configmap-8486" to be "success or failure"
Aug 30 17:29:18.621: INFO: Pod "pod-configmaps-44856745-b0bb-4af8-9820-4d9ca0b6945b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.82961ms
Aug 30 17:29:20.628: INFO: Pod "pod-configmaps-44856745-b0bb-4af8-9820-4d9ca0b6945b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015412048s
Aug 30 17:29:22.636: INFO: Pod "pod-configmaps-44856745-b0bb-4af8-9820-4d9ca0b6945b": Phase="Running", Reason="", readiness=true. Elapsed: 4.023522528s
Aug 30 17:29:24.643: INFO: Pod "pod-configmaps-44856745-b0bb-4af8-9820-4d9ca0b6945b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031257736s
STEP: Saw pod success
Aug 30 17:29:24.644: INFO: Pod "pod-configmaps-44856745-b0bb-4af8-9820-4d9ca0b6945b" satisfied condition "success or failure"
Aug 30 17:29:24.649: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-44856745-b0bb-4af8-9820-4d9ca0b6945b container configmap-volume-test: 
STEP: delete the pod
Aug 30 17:29:24.685: INFO: Waiting for pod pod-configmaps-44856745-b0bb-4af8-9820-4d9ca0b6945b to disappear
Aug 30 17:29:24.694: INFO: Pod pod-configmaps-44856745-b0bb-4af8-9820-4d9ca0b6945b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:29:24.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8486" for this suite.
Aug 30 17:29:30.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:29:30.866: INFO: namespace configmap-8486 deletion completed in 6.162713124s

• [SLOW TEST:12.364 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
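
What passed here is one ConfigMap consumed through two separate volumes in the same pod. A bare-bones reproduction under hypothetical names:

kubectl create configmap cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-two-volumes    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
    volumeMounts:
    - name: cm-a
      mountPath: /etc/cm-a
    - name: cm-b
      mountPath: /etc/cm-b
  volumes:
  - name: cm-a
    configMap:
      name: cm-demo
  - name: cm-b
    configMap:
      name: cm-demo
EOF
kubectl logs cm-two-volumes   # value-1 should appear twice, once per mount
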
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:29:30.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-31e7fec8-6e7d-493a-88f6-89350513c1ae
STEP: Creating a pod to test consume configMaps
Aug 30 17:29:31.010: INFO: Waiting up to 5m0s for pod "pod-configmaps-a6c3f1a2-3937-4858-add4-3700822623d3" in namespace "configmap-7696" to be "success or failure"
Aug 30 17:29:31.018: INFO: Pod "pod-configmaps-a6c3f1a2-3937-4858-add4-3700822623d3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139916ms
Aug 30 17:29:33.028: INFO: Pod "pod-configmaps-a6c3f1a2-3937-4858-add4-3700822623d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017802113s
Aug 30 17:29:35.035: INFO: Pod "pod-configmaps-a6c3f1a2-3937-4858-add4-3700822623d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024492349s
Aug 30 17:29:37.042: INFO: Pod "pod-configmaps-a6c3f1a2-3937-4858-add4-3700822623d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031711027s
STEP: Saw pod success
Aug 30 17:29:37.042: INFO: Pod "pod-configmaps-a6c3f1a2-3937-4858-add4-3700822623d3" satisfied condition "success or failure"
Aug 30 17:29:37.065: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a6c3f1a2-3937-4858-add4-3700822623d3 container configmap-volume-test: 
STEP: delete the pod
Aug 30 17:29:37.097: INFO: Waiting for pod pod-configmaps-a6c3f1a2-3937-4858-add4-3700822623d3 to disappear
Aug 30 17:29:37.108: INFO: Pod pod-configmaps-a6c3f1a2-3937-4858-add4-3700822623d3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:29:37.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7696" for this suite.
Aug 30 17:29:43.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:29:43.296: INFO: namespace configmap-7696 deletion completed in 6.1781156s

• [SLOW TEST:12.428 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:29:43.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5605.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5605.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5605.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5605.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5605.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5605.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 30 17:29:51.505: INFO: DNS probes using dns-5605/dns-test-7f6a1c13-a7c8-4824-a059-a1df9b009c1d succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:29:51.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5605" for this suite.
Aug 30 17:29:57.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:29:57.748: INFO: namespace dns-5605 deletion completed in 6.19332893s

• [SLOW TEST:14.450 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
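
The wheezy/jessie probe scripts above reduce to getent lookups against /etc/hosts entries that the kubelet manages for every pod. The same check can be run by hand from a throwaway pod (name hypothetical; busybox assumed pullable):

kubectl run hosts-probe --rm -it --restart=Never --image=busybox -- \
  sh -c 'hostname; cat /etc/hosts'

The pod's own hostname should appear in /etc/hosts among the kubelet-managed entries.
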
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:29:57.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Aug 30 17:29:57.892: INFO: namespace kubectl-6652
Aug 30 17:29:57.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6652'
Aug 30 17:29:59.551: INFO: stderr: ""
Aug 30 17:29:59.552: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 30 17:30:00.559: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 17:30:00.559: INFO: Found 0 / 1
Aug 30 17:30:01.560: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 17:30:01.560: INFO: Found 0 / 1
Aug 30 17:30:02.560: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 17:30:02.560: INFO: Found 0 / 1
Aug 30 17:30:03.560: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 17:30:03.560: INFO: Found 1 / 1
Aug 30 17:30:03.560: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 30 17:30:03.566: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 17:30:03.566: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 30 17:30:03.567: INFO: wait on redis-master startup in kubectl-6652 
Aug 30 17:30:03.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-shd2d redis-master --namespace=kubectl-6652'
Aug 30 17:30:04.907: INFO: stderr: ""
Aug 30 17:30:04.908: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 30 Aug 17:30:02.481 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Aug 17:30:02.481 # Server started, Redis version 3.2.12\n1:M 30 Aug 17:30:02.481 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Aug 17:30:02.481 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Aug 30 17:30:04.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6652'
Aug 30 17:30:06.311: INFO: stderr: ""
Aug 30 17:30:06.312: INFO: stdout: "service/rm2 exposed\n"
Aug 30 17:30:06.340: INFO: Service rm2 in namespace kubectl-6652 found.
STEP: exposing service
Aug 30 17:30:08.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6652'
Aug 30 17:30:09.731: INFO: stderr: ""
Aug 30 17:30:09.732: INFO: stdout: "service/rm3 exposed\n"
Aug 30 17:30:09.768: INFO: Service rm3 in namespace kubectl-6652 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:30:11.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6652" for this suite.
Aug 30 17:30:35.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:30:35.986: INFO: namespace kubectl-6652 deletion completed in 24.196710264s

• [SLOW TEST:38.236 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
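
Stripped of the harness flags, the expose chain exercised above is just three commands; the second expose copies the selector from an existing service rather than from the rc:

kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get endpoints rm2 rm3   # both should point at the redis-master pod on 6379
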
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:30:35.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 30 17:30:40.702: INFO: Successfully updated pod "pod-update-a18d081a-43c2-465f-ac47-ec6b9200444d"
STEP: verifying the updated pod is in kubernetes
Aug 30 17:30:40.712: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:30:40.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7013" for this suite.
Aug 30 17:31:02.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:31:02.895: INFO: namespace pods-7013 deletion completed in 22.176829892s

• [SLOW TEST:26.909 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
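
The "Pod update OK" above corresponds to an in-place mutation of a running pod; labels are among the few pod fields that stay mutable after creation. A hand-run equivalent against a hypothetical pod:

kubectl label pod pod-update-demo time=updated --overwrite
kubectl get pod pod-update-demo --show-labels
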
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:31:02.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-a55cf403-c63a-4034-b442-2166e1d30950
STEP: Creating secret with name secret-projected-all-test-volume-f86833aa-4bb7-4c5c-b638-3b7271736d10
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 30 17:31:02.982: INFO: Waiting up to 5m0s for pod "projected-volume-dc3f1915-04c4-404e-af23-6294afd9f1c2" in namespace "projected-9895" to be "success or failure"
Aug 30 17:31:02.987: INFO: Pod "projected-volume-dc3f1915-04c4-404e-af23-6294afd9f1c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.898382ms
Aug 30 17:31:05.042: INFO: Pod "projected-volume-dc3f1915-04c4-404e-af23-6294afd9f1c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05909534s
Aug 30 17:31:07.187: INFO: Pod "projected-volume-dc3f1915-04c4-404e-af23-6294afd9f1c2": Phase="Running", Reason="", readiness=true. Elapsed: 4.204094925s
Aug 30 17:31:09.210: INFO: Pod "projected-volume-dc3f1915-04c4-404e-af23-6294afd9f1c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.228035249s
STEP: Saw pod success
Aug 30 17:31:09.211: INFO: Pod "projected-volume-dc3f1915-04c4-404e-af23-6294afd9f1c2" satisfied condition "success or failure"
Aug 30 17:31:09.217: INFO: Trying to get logs from node iruya-worker pod projected-volume-dc3f1915-04c4-404e-af23-6294afd9f1c2 container projected-all-volume-test: 
STEP: delete the pod
Aug 30 17:31:09.250: INFO: Waiting for pod projected-volume-dc3f1915-04c4-404e-af23-6294afd9f1c2 to disappear
Aug 30 17:31:09.302: INFO: Pod projected-volume-dc3f1915-04c4-404e-af23-6294afd9f1c2 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:31:09.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9895" for this suite.
Aug 30 17:31:15.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:31:15.563: INFO: namespace projected-9895 deletion completed in 6.249782447s

• [SLOW TEST:12.665 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
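
"All projections" here means a single projected volume fed by a secret, a configMap, and the downward API at once. A compact sketch with hypothetical names:

kubectl create configmap projected-cm --from-literal=configmap-data=from-cm
kubectl create secret generic projected-secret --from-literal=secret-data=from-secret
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "cat /all/podname /all/cm /all/secret"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: projected-cm
          items:
          - key: configmap-data
            path: cm
      - secret:
          name: projected-secret
          items:
          - key: secret-data
            path: secret
EOF
kubectl logs projected-all-demo   # pod name, configMap value, secret value
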
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:31:15.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 30 17:31:15.662: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:31:16.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1981" for this suite.
Aug 30 17:31:22.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:31:23.033: INFO: namespace custom-resource-definition-1981 deletion completed in 6.190321083s

• [SLOW TEST:7.465 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
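
The CRD test leaves no object names in the log, but the create/delete round trip it performs looks like this against the apiextensions v1beta1 API current in 1.15 (group and kind below are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
kubectl get crd foos.example.com
kubectl delete crd foos.example.com
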
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:31:23.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 30 17:31:23.107: INFO: Waiting up to 5m0s for pod "pod-898fda52-4232-4808-84c7-38027e30c3f0" in namespace "emptydir-6183" to be "success or failure"
Aug 30 17:31:23.141: INFO: Pod "pod-898fda52-4232-4808-84c7-38027e30c3f0": Phase="Pending", Reason="", readiness=false. Elapsed: 34.192474ms
Aug 30 17:31:25.155: INFO: Pod "pod-898fda52-4232-4808-84c7-38027e30c3f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048581389s
Aug 30 17:31:27.162: INFO: Pod "pod-898fda52-4232-4808-84c7-38027e30c3f0": Phase="Running", Reason="", readiness=true. Elapsed: 4.055325608s
Aug 30 17:31:29.169: INFO: Pod "pod-898fda52-4232-4808-84c7-38027e30c3f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061872527s
STEP: Saw pod success
Aug 30 17:31:29.169: INFO: Pod "pod-898fda52-4232-4808-84c7-38027e30c3f0" satisfied condition "success or failure"
Aug 30 17:31:29.173: INFO: Trying to get logs from node iruya-worker2 pod pod-898fda52-4232-4808-84c7-38027e30c3f0 container test-container: 
STEP: delete the pod
Aug 30 17:31:29.214: INFO: Waiting for pod pod-898fda52-4232-4808-84c7-38027e30c3f0 to disappear
Aug 30 17:31:29.223: INFO: Pod pod-898fda52-4232-4808-84c7-38027e30c3f0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:31:29.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6183" for this suite.
Aug 30 17:31:35.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:31:35.493: INFO: namespace emptydir-6183 deletion completed in 6.262863478s

• [SLOW TEST:12.458 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:31:35.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 30 17:31:35.729: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"d0a5e09d-f3bd-458a-98e7-1388a5b14375", Controller:(*bool)(0x4002e2e4ea), BlockOwnerDeletion:(*bool)(0x4002e2e4eb)}}
Aug 30 17:31:35.741: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"019465ef-9080-4736-9db0-025ac369cb57", Controller:(*bool)(0x4002e2e6da), BlockOwnerDeletion:(*bool)(0x4002e2e6db)}}
Aug 30 17:31:35.772: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a9824942-2dfe-4b7b-a6f5-0310f20c81e6", Controller:(*bool)(0x4002e2e86a), BlockOwnerDeletion:(*bool)(0x4002e2e86b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:31:40.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9005" for this suite.
Aug 30 17:31:46.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:31:47.069: INFO: namespace gc-9005 deletion completed in 6.224049609s

• [SLOW TEST:11.573 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
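
The three OwnerReferences dumps above form a cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2), and the point of the test is that the garbage collector still makes progress. Roughly the same shape can be built by hand; the patches need real UIDs, hence the jsonpath lookups (all names and the pause image tag are assumptions, not taken from this run):

for p in pod1 pod2 pod3; do
  kubectl run "$p" --image=k8s.gcr.io/pause:3.1 --restart=Never
done
uid() { kubectl get pod "$1" -o jsonpath='{.metadata.uid}'; }
ref() {
  printf '{"metadata":{"ownerReferences":[{"apiVersion":"v1","kind":"Pod","name":"%s","uid":"%s","controller":true,"blockOwnerDeletion":true}]}}' "$1" "$(uid "$1")"
}
kubectl patch pod pod1 --type=merge -p "$(ref pod3)"
kubectl patch pod pod2 --type=merge -p "$(ref pod1)"
kubectl patch pod pod3 --type=merge -p "$(ref pod2)"
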
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:31:47.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 30 17:31:47.146: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:31:53.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3428" for this suite.
Aug 30 17:31:59.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:31:59.867: INFO: namespace init-container-3428 deletion completed in 6.159081036s

• [SLOW TEST:12.795 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:31:59.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-ng5h
STEP: Creating a pod to test atomic-volume-subpath
Aug 30 17:32:00.035: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ng5h" in namespace "subpath-383" to be "success or failure"
Aug 30 17:32:00.045: INFO: Pod "pod-subpath-test-downwardapi-ng5h": Phase="Pending", Reason="", readiness=false. Elapsed: 10.269045ms
Aug 30 17:32:02.053: INFO: Pod "pod-subpath-test-downwardapi-ng5h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017923957s
Aug 30 17:32:04.060: INFO: Pod "pod-subpath-test-downwardapi-ng5h": Phase="Running", Reason="", readiness=true. Elapsed: 4.025059661s
Aug 30 17:32:06.067: INFO: Pod "pod-subpath-test-downwardapi-ng5h": Phase="Running", Reason="", readiness=true. Elapsed: 6.032547941s
Aug 30 17:32:08.074: INFO: Pod "pod-subpath-test-downwardapi-ng5h": Phase="Running", Reason="", readiness=true. Elapsed: 8.039282378s
Aug 30 17:32:10.097: INFO: Pod "pod-subpath-test-downwardapi-ng5h": Phase="Running", Reason="", readiness=true. Elapsed: 10.062555382s
Aug 30 17:32:12.104: INFO: Pod "pod-subpath-test-downwardapi-ng5h": Phase="Running", Reason="", readiness=true. Elapsed: 12.069821196s
Aug 30 17:32:14.111: INFO: Pod "pod-subpath-test-downwardapi-ng5h": Phase="Running", Reason="", readiness=true. Elapsed: 14.076602917s
Aug 30 17:32:16.118: INFO: Pod "pod-subpath-test-downwardapi-ng5h": Phase="Running", Reason="", readiness=true. Elapsed: 16.083660377s
Aug 30 17:32:18.126: INFO: Pod "pod-subpath-test-downwardapi-ng5h": Phase="Running", Reason="", readiness=true. Elapsed: 18.091636241s
Aug 30 17:32:20.132: INFO: Pod "pod-subpath-test-downwardapi-ng5h": Phase="Running", Reason="", readiness=true. Elapsed: 20.097420206s
Aug 30 17:32:22.139: INFO: Pod "pod-subpath-test-downwardapi-ng5h": Phase="Running", Reason="", readiness=true. Elapsed: 22.104537354s
Aug 30 17:32:24.147: INFO: Pod "pod-subpath-test-downwardapi-ng5h": Phase="Running", Reason="", readiness=true. Elapsed: 24.11277455s
Aug 30 17:32:26.154: INFO: Pod "pod-subpath-test-downwardapi-ng5h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.119679512s
STEP: Saw pod success
Aug 30 17:32:26.155: INFO: Pod "pod-subpath-test-downwardapi-ng5h" satisfied condition "success or failure"
Aug 30 17:32:26.159: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-ng5h container test-container-subpath-downwardapi-ng5h: 
STEP: delete the pod
Aug 30 17:32:26.180: INFO: Waiting for pod pod-subpath-test-downwardapi-ng5h to disappear
Aug 30 17:32:26.186: INFO: Pod pod-subpath-test-downwardapi-ng5h no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-ng5h
Aug 30 17:32:26.186: INFO: Deleting pod "pod-subpath-test-downwardapi-ng5h" in namespace "subpath-383"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:32:26.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-383" for this suite.
Aug 30 17:32:32.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:32:32.394: INFO: namespace subpath-383 deletion completed in 6.169685074s

• [SLOW TEST:32.527 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
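
The long run of "Running" polls above is the interesting part: the test container keeps reading a file mounted via subPath out of an atomic-writer (downward API) volume. A trimmed-down manifest of the same shape, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo    # hypothetical name
  labels:
    foo: bar
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /mnt/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /mnt/labels
      subPath: labels
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF

One caveat worth knowing: a subPath mount into an atomic-writer volume is resolved once at container start, so later updates to the projected data are not visible through it.
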
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:32:32.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-fx54
STEP: Creating a pod to test atomic-volume-subpath
Aug 30 17:32:32.497: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fx54" in namespace "subpath-6217" to be "success or failure"
Aug 30 17:32:32.515: INFO: Pod "pod-subpath-test-projected-fx54": Phase="Pending", Reason="", readiness=false. Elapsed: 17.667934ms
Aug 30 17:32:34.521: INFO: Pod "pod-subpath-test-projected-fx54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02384728s
Aug 30 17:32:36.528: INFO: Pod "pod-subpath-test-projected-fx54": Phase="Running", Reason="", readiness=true. Elapsed: 4.030902153s
Aug 30 17:32:38.535: INFO: Pod "pod-subpath-test-projected-fx54": Phase="Running", Reason="", readiness=true. Elapsed: 6.038070907s
Aug 30 17:32:40.542: INFO: Pod "pod-subpath-test-projected-fx54": Phase="Running", Reason="", readiness=true. Elapsed: 8.044660768s
Aug 30 17:32:42.549: INFO: Pod "pod-subpath-test-projected-fx54": Phase="Running", Reason="", readiness=true. Elapsed: 10.05164362s
Aug 30 17:32:44.555: INFO: Pod "pod-subpath-test-projected-fx54": Phase="Running", Reason="", readiness=true. Elapsed: 12.058429183s
Aug 30 17:32:46.563: INFO: Pod "pod-subpath-test-projected-fx54": Phase="Running", Reason="", readiness=true. Elapsed: 14.065679553s
Aug 30 17:32:48.569: INFO: Pod "pod-subpath-test-projected-fx54": Phase="Running", Reason="", readiness=true. Elapsed: 16.07173927s
Aug 30 17:32:50.575: INFO: Pod "pod-subpath-test-projected-fx54": Phase="Running", Reason="", readiness=true. Elapsed: 18.078502735s
Aug 30 17:32:52.583: INFO: Pod "pod-subpath-test-projected-fx54": Phase="Running", Reason="", readiness=true. Elapsed: 20.085751976s
Aug 30 17:32:54.590: INFO: Pod "pod-subpath-test-projected-fx54": Phase="Running", Reason="", readiness=true. Elapsed: 22.09303418s
Aug 30 17:32:56.598: INFO: Pod "pod-subpath-test-projected-fx54": Phase="Running", Reason="", readiness=true. Elapsed: 24.100536688s
Aug 30 17:32:58.604: INFO: Pod "pod-subpath-test-projected-fx54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.107190079s
STEP: Saw pod success
Aug 30 17:32:58.605: INFO: Pod "pod-subpath-test-projected-fx54" satisfied condition "success or failure"
Aug 30 17:32:58.609: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-fx54 container test-container-subpath-projected-fx54: 
STEP: delete the pod
Aug 30 17:32:58.649: INFO: Waiting for pod pod-subpath-test-projected-fx54 to disappear
Aug 30 17:32:58.656: INFO: Pod pod-subpath-test-projected-fx54 no longer exists
STEP: Deleting pod pod-subpath-test-projected-fx54
Aug 30 17:32:58.656: INFO: Deleting pod "pod-subpath-test-projected-fx54" in namespace "subpath-6217"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:32:58.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6217" for this suite.
Aug 30 17:33:04.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:33:04.830: INFO: namespace subpath-6217 deletion completed in 6.163027458s

• [SLOW TEST:32.434 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:33:04.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-08baa57c-3a78-4440-a15c-6f501dc2cbe5
STEP: Creating a pod to test consume configMaps
Aug 30 17:33:04.929: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6101a049-b7c4-4304-b981-c9ba09a8d19c" in namespace "projected-8665" to be "success or failure"
Aug 30 17:33:04.959: INFO: Pod "pod-projected-configmaps-6101a049-b7c4-4304-b981-c9ba09a8d19c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.648166ms
Aug 30 17:33:06.965: INFO: Pod "pod-projected-configmaps-6101a049-b7c4-4304-b981-c9ba09a8d19c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035961107s
Aug 30 17:33:08.971: INFO: Pod "pod-projected-configmaps-6101a049-b7c4-4304-b981-c9ba09a8d19c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042013201s
STEP: Saw pod success
Aug 30 17:33:08.971: INFO: Pod "pod-projected-configmaps-6101a049-b7c4-4304-b981-c9ba09a8d19c" satisfied condition "success or failure"
Aug 30 17:33:08.976: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-6101a049-b7c4-4304-b981-c9ba09a8d19c container projected-configmap-volume-test: 
STEP: delete the pod
Aug 30 17:33:09.000: INFO: Waiting for pod pod-projected-configmaps-6101a049-b7c4-4304-b981-c9ba09a8d19c to disappear
Aug 30 17:33:09.011: INFO: Pod pod-projected-configmaps-6101a049-b7c4-4304-b981-c9ba09a8d19c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:33:09.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8665" for this suite.
Aug 30 17:33:15.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:33:15.181: INFO: namespace projected-8665 deletion completed in 6.161578936s

• [SLOW TEST:10.350 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
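
"Mappings and Item mode" refers to the items list of a configMap projection, which both renames keys and sets per-file permissions. A sketch with hypothetical names:

kubectl create configmap cm-mapped --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-mapped-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/cm/path/to/data-2; cat /etc/cm/path/to/data-2"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: cm-mapped
          items:
          - key: data-1
            path: path/to/data-2
            mode: 0400
EOF
kubectl logs cm-mapped-demo   # -r-------- permissions, then value-1
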
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:33:15.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-6044/secret-test-d7e128a5-00e1-47c7-bc9d-cd79aa802a1a
STEP: Creating a pod to test consume secrets
Aug 30 17:33:15.338: INFO: Waiting up to 5m0s for pod "pod-configmaps-3a008f90-9a99-4f6b-bfbd-4363da73dce0" in namespace "secrets-6044" to be "success or failure"
Aug 30 17:33:15.346: INFO: Pod "pod-configmaps-3a008f90-9a99-4f6b-bfbd-4363da73dce0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.840488ms
Aug 30 17:33:17.577: INFO: Pod "pod-configmaps-3a008f90-9a99-4f6b-bfbd-4363da73dce0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238843943s
Aug 30 17:33:19.584: INFO: Pod "pod-configmaps-3a008f90-9a99-4f6b-bfbd-4363da73dce0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.245240559s
STEP: Saw pod success
Aug 30 17:33:19.584: INFO: Pod "pod-configmaps-3a008f90-9a99-4f6b-bfbd-4363da73dce0" satisfied condition "success or failure"
Aug 30 17:33:19.591: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-3a008f90-9a99-4f6b-bfbd-4363da73dce0 container env-test: 
STEP: delete the pod
Aug 30 17:33:19.674: INFO: Waiting for pod pod-configmaps-3a008f90-9a99-4f6b-bfbd-4363da73dce0 to disappear
Aug 30 17:33:19.687: INFO: Pod pod-configmaps-3a008f90-9a99-4f6b-bfbd-4363da73dce0 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:33:19.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6044" for this suite.
Aug 30 17:33:25.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:33:25.843: INFO: namespace secrets-6044 deletion completed in 6.149562472s

• [SLOW TEST:10.658 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
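
This Secrets test is the environment-variable twin of the volume tests earlier in the run. A minimal reproduction (names hypothetical):

kubectl create secret generic env-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["env"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: env-secret
          key: data-1
EOF
kubectl logs secret-env-demo | grep SECRET_DATA   # SECRET_DATA=value-1
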
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:33:25.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Aug 30 17:33:25.930: INFO: Waiting up to 5m0s for pod "client-containers-e869a53c-6694-4d56-b8e8-5a5cf7dfadf8" in namespace "containers-8708" to be "success or failure"
Aug 30 17:33:25.934: INFO: Pod "client-containers-e869a53c-6694-4d56-b8e8-5a5cf7dfadf8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.475915ms
Aug 30 17:33:28.267: INFO: Pod "client-containers-e869a53c-6694-4d56-b8e8-5a5cf7dfadf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.336743205s
Aug 30 17:33:30.274: INFO: Pod "client-containers-e869a53c-6694-4d56-b8e8-5a5cf7dfadf8": Phase="Running", Reason="", readiness=true. Elapsed: 4.34358935s
Aug 30 17:33:32.284: INFO: Pod "client-containers-e869a53c-6694-4d56-b8e8-5a5cf7dfadf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.353464998s
STEP: Saw pod success
Aug 30 17:33:32.284: INFO: Pod "client-containers-e869a53c-6694-4d56-b8e8-5a5cf7dfadf8" satisfied condition "success or failure"
Aug 30 17:33:32.299: INFO: Trying to get logs from node iruya-worker2 pod client-containers-e869a53c-6694-4d56-b8e8-5a5cf7dfadf8 container test-container: 
STEP: delete the pod
Aug 30 17:33:32.367: INFO: Waiting for pod client-containers-e869a53c-6694-4d56-b8e8-5a5cf7dfadf8 to disappear
Aug 30 17:33:32.372: INFO: Pod client-containers-e869a53c-6694-4d56-b8e8-5a5cf7dfadf8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:33:32.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8708" for this suite.
Aug 30 17:33:38.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:33:38.561: INFO: namespace containers-8708 deletion completed in 6.181058849s

• [SLOW TEST:12.717 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
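
The spec above pins down CMD/ENTRYPOINT semantics: values in spec.containers[].args replace the image's default CMD while the image ENTRYPOINT is kept, whereas spec.containers[].command would replace the ENTRYPOINT itself. A minimal sketch of a pod exercising the same behavior (pod name and image are illustrative, not taken from this run):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: args-override-demo               # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox                       # any image with a default CMD
      args: ["echo", "overridden arguments"]   # replaces the image CMD only
  EOF

Afterwards, kubectl logs args-override-demo should show the overridden output rather than whatever the image would print by default.
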
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:33:38.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Aug 30 17:33:42.743: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 30 17:33:48.979: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:33:48.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1050" for this suite.
Aug 30 17:33:55.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:33:55.151: INFO: namespace pods-1050 deletion completed in 6.155030073s

• [SLOW TEST:16.589 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
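
Graceful deletion, as verified above, is two-phased: the API server stamps metadata.deletionTimestamp and deletionGracePeriodSeconds, the kubelet sends SIGTERM, and only after the grace period expires is the container killed and the object removed. A hedged way to watch the same sequence by hand (pod name illustrative):

  kubectl delete pod graceful-demo --grace-period=30 --wait=false
  # while the pod is terminating, the stamped grace period is visible:
  kubectl get pod graceful-demo -o jsonpath='{.metadata.deletionGracePeriodSeconds}'
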
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:33:55.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 30 17:33:55.256: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26bda2fd-cda5-4cd4-94b3-167a601c4a5f" in namespace "projected-6062" to be "success or failure"
Aug 30 17:33:55.279: INFO: Pod "downwardapi-volume-26bda2fd-cda5-4cd4-94b3-167a601c4a5f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.126601ms
Aug 30 17:33:57.330: INFO: Pod "downwardapi-volume-26bda2fd-cda5-4cd4-94b3-167a601c4a5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074600727s
Aug 30 17:33:59.338: INFO: Pod "downwardapi-volume-26bda2fd-cda5-4cd4-94b3-167a601c4a5f": Phase="Running", Reason="", readiness=true. Elapsed: 4.082221234s
Aug 30 17:34:01.344: INFO: Pod "downwardapi-volume-26bda2fd-cda5-4cd4-94b3-167a601c4a5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088492913s
STEP: Saw pod success
Aug 30 17:34:01.344: INFO: Pod "downwardapi-volume-26bda2fd-cda5-4cd4-94b3-167a601c4a5f" satisfied condition "success or failure"
Aug 30 17:34:01.348: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-26bda2fd-cda5-4cd4-94b3-167a601c4a5f container client-container: 
STEP: delete the pod
Aug 30 17:34:01.391: INFO: Waiting for pod downwardapi-volume-26bda2fd-cda5-4cd4-94b3-167a601c4a5f to disappear
Aug 30 17:34:01.395: INFO: Pod downwardapi-volume-26bda2fd-cda5-4cd4-94b3-167a601c4a5f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:34:01.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6062" for this suite.
Aug 30 17:34:07.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:34:07.618: INFO: namespace projected-6062 deletion completed in 6.173779447s

• [SLOW TEST:12.466 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
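
The DefaultMode check above concerns the permission bits applied to files materialized by a projected volume. A minimal sketch with a downwardAPI source (all names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "ls -l /etc/podinfo"]   # shows -r-------- for mode 0400
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        defaultMode: 0400    # applied to every projected file without an explicit mode
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name
  EOF
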
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when creating a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:34:07.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when creating a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 30 17:34:15.790: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 30 17:34:15.861: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 30 17:34:17.862: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 30 17:34:17.869: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 30 17:34:19.862: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 30 17:34:19.868: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 30 17:34:21.862: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 30 17:34:21.869: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 30 17:34:23.862: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 30 17:34:23.870: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 30 17:34:25.862: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 30 17:34:25.869: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 30 17:34:27.862: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 30 17:34:27.870: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 30 17:34:29.862: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 30 17:34:29.869: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 30 17:34:31.862: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 30 17:34:31.869: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 30 17:34:33.862: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 30 17:34:33.869: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 30 17:34:35.862: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 30 17:34:36.352: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 30 17:34:37.862: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 30 17:34:37.867: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 30 17:34:39.862: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 30 17:34:39.869: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 30 17:34:41.862: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 30 17:34:41.869: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 30 17:34:43.862: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 30 17:34:43.868: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:34:43.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1500" for this suite.
Aug 30 17:35:05.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:35:06.030: INFO: namespace container-lifecycle-hook-1500 deletion completed in 22.151382022s

• [SLOW TEST:58.411 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when creating a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
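
A postStart exec hook runs inside the container right after it is created; the container is not marked Running until the hook returns, and a failing hook kills the container. A minimal sketch of the shape the test creates (names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: poststart-demo
  spec:
    containers:
    - name: main
      image: nginx
      lifecycle:
        postStart:
          exec:
            command: ["sh", "-c", "echo started > /tmp/poststart-marker"]
  EOF
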
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:35:06.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-6e23e0b9-2a1d-4550-bd2f-56271881497a
STEP: Creating a pod to test consume configMaps
Aug 30 17:35:06.141: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3f2e54c-fab8-4b66-b8f9-db3b5dd7fb8e" in namespace "configmap-7534" to be "success or failure"
Aug 30 17:35:06.159: INFO: Pod "pod-configmaps-f3f2e54c-fab8-4b66-b8f9-db3b5dd7fb8e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.927033ms
Aug 30 17:35:08.166: INFO: Pod "pod-configmaps-f3f2e54c-fab8-4b66-b8f9-db3b5dd7fb8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024928393s
Aug 30 17:35:10.172: INFO: Pod "pod-configmaps-f3f2e54c-fab8-4b66-b8f9-db3b5dd7fb8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031733229s
STEP: Saw pod success
Aug 30 17:35:10.173: INFO: Pod "pod-configmaps-f3f2e54c-fab8-4b66-b8f9-db3b5dd7fb8e" satisfied condition "success or failure"
Aug 30 17:35:10.177: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-f3f2e54c-fab8-4b66-b8f9-db3b5dd7fb8e container configmap-volume-test: 
STEP: delete the pod
Aug 30 17:35:10.363: INFO: Waiting for pod pod-configmaps-f3f2e54c-fab8-4b66-b8f9-db3b5dd7fb8e to disappear
Aug 30 17:35:10.383: INFO: Pod pod-configmaps-f3f2e54c-fab8-4b66-b8f9-db3b5dd7fb8e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:35:10.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7534" for this suite.
Aug 30 17:35:16.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:35:16.542: INFO: namespace configmap-7534 deletion completed in 6.151612197s

• [SLOW TEST:10.511 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
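
The same defaultMode mechanism applies to plain ConfigMap volumes: the kubelet writes each key as a file and sets the requested mode on it. A sketch with illustrative names and values:

  kubectl create configmap demo-config --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/config && cat /etc/config/data-1"]
      volumeMounts:
      - name: config
        mountPath: /etc/config
    volumes:
    - name: config
      configMap:
        name: demo-config
        defaultMode: 0400
  EOF
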
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:35:16.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 30 17:35:16.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2522'
Aug 30 17:35:17.925: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 30 17:35:17.925: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Aug 30 17:35:17.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2522'
Aug 30 17:35:19.250: INFO: stderr: ""
Aug 30 17:35:19.251: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:35:19.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2522" for this suite.
Aug 30 17:35:25.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:35:25.406: INFO: namespace kubectl-2522 deletion completed in 6.148379179s

• [SLOW TEST:8.863 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
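
The stderr captured above is the v1.15 generator deprecation: "kubectl run NAME --image=..." still defaulted to creating an apps/v1 Deployment via the deployment/apps.v1 generator, but the warning names explicit replacements. A sketch of both alternatives it suggests:

  # bare pod via the run-pod/v1 generator (the form the warning recommends):
  kubectl run nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
  # or an explicit Deployment, no generator involved:
  kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
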
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when creating a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:35:25.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when creating a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 30 17:35:33.564: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 30 17:35:33.571: INFO: Pod pod-with-prestop-http-hook still exists
Aug 30 17:35:35.571: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 30 17:35:35.603: INFO: Pod pod-with-prestop-http-hook still exists
Aug 30 17:35:37.571: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 30 17:35:37.578: INFO: Pod pod-with-prestop-http-hook still exists
Aug 30 17:35:39.571: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 30 17:35:39.579: INFO: Pod pod-with-prestop-http-hook still exists
Aug 30 17:35:41.571: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 30 17:35:41.578: INFO: Pod pod-with-prestop-http-hook still exists
Aug 30 17:35:43.571: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 30 17:35:43.577: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:35:43.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7297" for this suite.
Aug 30 17:36:05.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:36:05.751: INFO: namespace container-lifecycle-hook-7297 deletion completed in 22.155238748s

• [SLOW TEST:40.343 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when creating a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
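
preStop is the mirror image of postStart: on deletion the kubelet runs the hook before sending SIGTERM, and in this test it is an HTTP GET against a separate handler pod rather than an exec. A sketch of the hook shape (host, path, port, and names all illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: prestop-http-demo
  spec:
    containers:
    - name: main
      image: nginx
      lifecycle:
        preStop:
          httpGet:
            host: 10.0.0.10          # the hook-handler pod's IP in the e2e test; illustrative here
            path: /echo?msg=prestop
            port: 8080
  EOF
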
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:36:05.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:36:11.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5720" for this suite.
Aug 30 17:36:57.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:36:58.120: INFO: namespace kubelet-test-5720 deletion completed in 46.169833174s

• [SLOW TEST:52.368 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
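
The kubelet test above only needs a container whose stdout lands in the container log. Reproducing it by hand is two commands (names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-logs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "echo 'hello from busybox'"]
  EOF
  kubectl logs busybox-logs-demo   # once the pod has run; prints: hello from busybox
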
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:36:58.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Aug 30 17:36:58.223: INFO: Waiting up to 5m0s for pod "client-containers-ff060446-a8e1-4f51-904a-28b6849c1d62" in namespace "containers-4605" to be "success or failure"
Aug 30 17:36:58.232: INFO: Pod "client-containers-ff060446-a8e1-4f51-904a-28b6849c1d62": Phase="Pending", Reason="", readiness=false. Elapsed: 8.980968ms
Aug 30 17:37:00.239: INFO: Pod "client-containers-ff060446-a8e1-4f51-904a-28b6849c1d62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015422361s
Aug 30 17:37:02.246: INFO: Pod "client-containers-ff060446-a8e1-4f51-904a-28b6849c1d62": Phase="Running", Reason="", readiness=true. Elapsed: 4.022630137s
Aug 30 17:37:04.253: INFO: Pod "client-containers-ff060446-a8e1-4f51-904a-28b6849c1d62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030349253s
STEP: Saw pod success
Aug 30 17:37:04.254: INFO: Pod "client-containers-ff060446-a8e1-4f51-904a-28b6849c1d62" satisfied condition "success or failure"
Aug 30 17:37:04.257: INFO: Trying to get logs from node iruya-worker pod client-containers-ff060446-a8e1-4f51-904a-28b6849c1d62 container test-container: 
STEP: delete the pod
Aug 30 17:37:04.311: INFO: Waiting for pod client-containers-ff060446-a8e1-4f51-904a-28b6849c1d62 to disappear
Aug 30 17:37:04.370: INFO: Pod client-containers-ff060446-a8e1-4f51-904a-28b6849c1d62 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:37:04.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4605" for this suite.
Aug 30 17:37:10.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:37:10.586: INFO: namespace containers-4605 deletion completed in 6.207800597s

• [SLOW TEST:12.464 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
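
This is the complement of the args-override test earlier: with both command and args absent, the image's own ENTRYPOINT and CMD run untouched, so the container behaves exactly as 'docker run IMAGE' would. Sketch (names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: image-defaults-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox      # neither command nor args set: image defaults apply
  EOF
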
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:37:10.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 30 17:37:10.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:37:14.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9093" for this suite.
Aug 30 17:37:52.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:37:53.023: INFO: namespace pods-9093 deletion completed in 38.177262129s

• [SLOW TEST:42.434 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
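
Remote command execution rides on the pod exec subresource: the client POSTs to .../pods/NAME/exec and upgrades the connection (SPDY, or WebSocket as tested here), multiplexing stdin/stdout/stderr as numbered streams much like the framed log.go traffic later in this log. From the CLI the same path looks roughly like this (pod name illustrative):

  kubectl exec ws-demo -- cat /etc/resolv.conf
  # with -v=8 the underlying subresource URL is visible, along the lines of:
  #   POST https://<apiserver>/api/v1/namespaces/default/pods/ws-demo/exec?command=cat&...
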
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:37:53.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:37:53.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4132" for this suite.
Aug 30 17:38:15.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:38:15.444: INFO: namespace kubelet-test-4132 deletion completed in 22.217049678s

• [SLOW TEST:22.420 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
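
The pod under test crash-loops by construction, which is exactly what makes the deletion path interesting: deletion must succeed no matter what state the restart backoff is in. A sketch of an equivalent always-failing pod (names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: always-fails-demo
  spec:
    restartPolicy: Always      # kubelet keeps restarting the failing container
    containers:
    - name: failer
      image: busybox
      command: ["/bin/false"]
  EOF
  kubectl delete pod always-fails-demo   # succeeds even mid-CrashLoopBackOff
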
SSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:38:15.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-6e60fa3c-099c-48d8-b1d2-bfc74ad4474a
STEP: Creating configMap with name cm-test-opt-upd-c538b882-4ac3-4302-8cfd-cded0be05aa7
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-6e60fa3c-099c-48d8-b1d2-bfc74ad4474a
STEP: Updating configmap cm-test-opt-upd-c538b882-4ac3-4302-8cfd-cded0be05aa7
STEP: Creating configMap with name cm-test-opt-create-e0336810-8fbe-438a-8de3-edcec4450ca5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:38:29.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7457" for this suite.
Aug 30 17:38:53.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:38:53.926: INFO: namespace configmap-7457 deletion completed in 24.152750119s

• [SLOW TEST:38.481 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
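
The "optional" flavor of ConfigMap volumes tolerates a missing ConfigMap, and the kubelet propagates creates, updates, and deletes of the referenced objects into already-mounted volumes on its sync loop, which is the eventual consistency the test waits on. A sketch of the volume shape (names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: optional-cm-demo
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: maybe-config
        mountPath: /etc/maybe
    volumes:
    - name: maybe-config
      configMap:
        name: not-created-yet   # may not exist at pod start
        optional: true          # pod still starts; files appear once the ConfigMap does
  EOF
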
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:38:53.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 30 17:38:54.001: INFO: Waiting up to 5m0s for pod "downward-api-56684017-1bf8-46e1-8beb-681b2ab72cd9" in namespace "downward-api-8863" to be "success or failure"
Aug 30 17:38:54.006: INFO: Pod "downward-api-56684017-1bf8-46e1-8beb-681b2ab72cd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.621556ms
Aug 30 17:38:56.043: INFO: Pod "downward-api-56684017-1bf8-46e1-8beb-681b2ab72cd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042266356s
Aug 30 17:38:58.052: INFO: Pod "downward-api-56684017-1bf8-46e1-8beb-681b2ab72cd9": Phase="Running", Reason="", readiness=true. Elapsed: 4.050816038s
Aug 30 17:39:00.059: INFO: Pod "downward-api-56684017-1bf8-46e1-8beb-681b2ab72cd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058416741s
STEP: Saw pod success
Aug 30 17:39:00.060: INFO: Pod "downward-api-56684017-1bf8-46e1-8beb-681b2ab72cd9" satisfied condition "success or failure"
Aug 30 17:39:00.065: INFO: Trying to get logs from node iruya-worker pod downward-api-56684017-1bf8-46e1-8beb-681b2ab72cd9 container dapi-container: 
STEP: delete the pod
Aug 30 17:39:00.097: INFO: Waiting for pod downward-api-56684017-1bf8-46e1-8beb-681b2ab72cd9 to disappear
Aug 30 17:39:00.101: INFO: Pod downward-api-56684017-1bf8-46e1-8beb-681b2ab72cd9 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:39:00.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8863" for this suite.
Aug 30 17:39:06.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:39:06.258: INFO: namespace downward-api-8863 deletion completed in 6.148751413s

• [SLOW TEST:12.328 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
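
Host IP comes from the downward API's env-var form, a fieldRef against pod status rather than a volume. Sketch (names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostip-env-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
      env:
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP   # the node's IP, resolved at pod start
  EOF
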
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:39:06.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:39:14.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1110" for this suite.
Aug 30 17:39:20.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:39:20.569: INFO: namespace kubelet-test-1110 deletion completed in 6.156813447s

• [SLOW TEST:14.310 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
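
What the test asserts is that a failed container surfaces a terminated state with a populated reason in pod status. Inspecting the same field by hand (pod name illustrative, reusing the always-failing pod sketched earlier):

  kubectl get pod always-fails-demo -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
  # typically "Error" for a non-zero exit; between restarts the current state
  # may be waiting/CrashLoopBackOff, which is why lastState is checked here
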
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:39:20.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-a9e15e54-72c8-4776-8647-1716e3d86341 in namespace container-probe-7463
Aug 30 17:39:26.665: INFO: Started pod busybox-a9e15e54-72c8-4776-8647-1716e3d86341 in namespace container-probe-7463
STEP: checking the pod's current state and verifying that restartCount is present
Aug 30 17:39:26.670: INFO: Initial restart count of pod busybox-a9e15e54-72c8-4776-8647-1716e3d86341 is 0
Aug 30 17:40:16.834: INFO: Restart count of pod container-probe-7463/busybox-a9e15e54-72c8-4776-8647-1716e3d86341 is now 1 (50.163932442s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:40:16.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7463" for this suite.
Aug 30 17:40:22.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:40:23.040: INFO: namespace container-probe-7463 deletion completed in 6.162575211s

• [SLOW TEST:62.468 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
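
The probe scenario is the classic one: the container creates /tmp/health, later removes it, and the exec probe's non-zero exit drives a restart, which is the restartCount 0 to 1 transition logged above. Sketch (name and timings illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-exec-demo
  spec:
    containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]   # exit 0 = healthy, non-zero = restart
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
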
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:40:23.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6794
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 30 17:40:23.142: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 30 17:40:49.563: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.242:8080/dial?request=hostName&protocol=udp&host=10.244.2.241&port=8081&tries=1'] Namespace:pod-network-test-6794 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 30 17:40:49.564: INFO: >>> kubeConfig: /root/.kube/config
I0830 17:40:49.654812       7 log.go:172] (0x400078c630) (0x4002463680) Create stream
I0830 17:40:49.657251       7 log.go:172] (0x400078c630) (0x4002463680) Stream added, broadcasting: 1
I0830 17:40:49.677633       7 log.go:172] (0x400078c630) Reply frame received for 1
I0830 17:40:49.678162       7 log.go:172] (0x400078c630) (0x40003201e0) Create stream
I0830 17:40:49.678228       7 log.go:172] (0x400078c630) (0x40003201e0) Stream added, broadcasting: 3
I0830 17:40:49.679919       7 log.go:172] (0x400078c630) Reply frame received for 3
I0830 17:40:49.680222       7 log.go:172] (0x400078c630) (0x4003647360) Create stream
I0830 17:40:49.680296       7 log.go:172] (0x400078c630) (0x4003647360) Stream added, broadcasting: 5
I0830 17:40:49.681936       7 log.go:172] (0x400078c630) Reply frame received for 5
I0830 17:40:49.779072       7 log.go:172] (0x400078c630) Data frame received for 3
I0830 17:40:49.779333       7 log.go:172] (0x400078c630) Data frame received for 5
I0830 17:40:49.779480       7 log.go:172] (0x4003647360) (5) Data frame handling
I0830 17:40:49.779805       7 log.go:172] (0x40003201e0) (3) Data frame handling
I0830 17:40:49.780678       7 log.go:172] (0x400078c630) Data frame received for 1
I0830 17:40:49.780970       7 log.go:172] (0x4002463680) (1) Data frame handling
I0830 17:40:49.781141       7 log.go:172] (0x4002463680) (1) Data frame sent
I0830 17:40:49.781359       7 log.go:172] (0x40003201e0) (3) Data frame sent
I0830 17:40:49.781932       7 log.go:172] (0x400078c630) Data frame received for 3
I0830 17:40:49.782037       7 log.go:172] (0x40003201e0) (3) Data frame handling
I0830 17:40:49.782941       7 log.go:172] (0x400078c630) (0x4002463680) Stream removed, broadcasting: 1
I0830 17:40:49.785294       7 log.go:172] (0x400078c630) Go away received
I0830 17:40:49.787278       7 log.go:172] (0x400078c630) (0x4002463680) Stream removed, broadcasting: 1
I0830 17:40:49.787896       7 log.go:172] (0x400078c630) (0x40003201e0) Stream removed, broadcasting: 3
I0830 17:40:49.788182       7 log.go:172] (0x400078c630) (0x4003647360) Stream removed, broadcasting: 5
Aug 30 17:40:49.790: INFO: Waiting for endpoints: map[]
Aug 30 17:40:49.796: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.242:8080/dial?request=hostName&protocol=udp&host=10.244.1.190&port=8081&tries=1'] Namespace:pod-network-test-6794 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 30 17:40:49.796: INFO: >>> kubeConfig: /root/.kube/config
I0830 17:40:49.847400       7 log.go:172] (0x400157a210) (0x4002df5680) Create stream
I0830 17:40:49.847564       7 log.go:172] (0x400157a210) (0x4002df5680) Stream added, broadcasting: 1
I0830 17:40:49.854412       7 log.go:172] (0x400157a210) Reply frame received for 1
I0830 17:40:49.854671       7 log.go:172] (0x400157a210) (0x4002bf2640) Create stream
I0830 17:40:49.854774       7 log.go:172] (0x400157a210) (0x4002bf2640) Stream added, broadcasting: 3
I0830 17:40:49.858763       7 log.go:172] (0x400157a210) Reply frame received for 3
I0830 17:40:49.858934       7 log.go:172] (0x400157a210) (0x4002bf26e0) Create stream
I0830 17:40:49.858999       7 log.go:172] (0x400157a210) (0x4002bf26e0) Stream added, broadcasting: 5
I0830 17:40:49.861232       7 log.go:172] (0x400157a210) Reply frame received for 5
I0830 17:40:49.934286       7 log.go:172] (0x400157a210) Data frame received for 3
I0830 17:40:49.934471       7 log.go:172] (0x4002bf2640) (3) Data frame handling
I0830 17:40:49.934626       7 log.go:172] (0x4002bf2640) (3) Data frame sent
I0830 17:40:49.934886       7 log.go:172] (0x400157a210) Data frame received for 5
I0830 17:40:49.935009       7 log.go:172] (0x4002bf26e0) (5) Data frame handling
I0830 17:40:49.935155       7 log.go:172] (0x400157a210) Data frame received for 3
I0830 17:40:49.935255       7 log.go:172] (0x4002bf2640) (3) Data frame handling
I0830 17:40:49.937394       7 log.go:172] (0x400157a210) Data frame received for 1
I0830 17:40:49.937496       7 log.go:172] (0x4002df5680) (1) Data frame handling
I0830 17:40:49.937597       7 log.go:172] (0x4002df5680) (1) Data frame sent
I0830 17:40:49.937701       7 log.go:172] (0x400157a210) (0x4002df5680) Stream removed, broadcasting: 1
I0830 17:40:49.937829       7 log.go:172] (0x400157a210) Go away received
I0830 17:40:49.938168       7 log.go:172] (0x400157a210) (0x4002df5680) Stream removed, broadcasting: 1
I0830 17:40:49.938309       7 log.go:172] (0x400157a210) (0x4002bf2640) Stream removed, broadcasting: 3
I0830 17:40:49.938435       7 log.go:172] (0x400157a210) (0x4002bf26e0) Stream removed, broadcasting: 5
Aug 30 17:40:49.938: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:40:49.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6794" for this suite.
Aug 30 17:41:11.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:41:12.103: INFO: namespace pod-network-test-6794 deletion completed in 22.154543828s

• [SLOW TEST:49.062 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
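
The curl lines above are the whole test mechanism: each netserver pod runs an HTTP/UDP echo server, and a client pod asks one of them (the /dial endpoint on 8080) to probe a peer over UDP on 8081; "Waiting for endpoints: map[]" with an empty map means every expected responder has answered. The probe itself, with the pod IPs from this run (they will differ elsewhere):

  curl -g -q -s 'http://10.244.2.242:8080/dial?request=hostName&protocol=udp&host=10.244.2.241&port=8081&tries=1'
  # responds with the hostnames of the pods that answered the UDP dial
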
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:41:12.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:41:18.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3557" for this suite.
Aug 30 17:41:24.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:41:24.532: INFO: namespace namespaces-3557 deletion completed in 6.15157341s
STEP: Destroying namespace "nsdeletetest-7209" for this suite.
Aug 30 17:41:24.536: INFO: Namespace nsdeletetest-7209 was already deleted
STEP: Destroying namespace "nsdeletetest-7355" for this suite.
Aug 30 17:41:30.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:41:30.725: INFO: namespace nsdeletetest-7355 deletion completed in 6.188855524s

• [SLOW TEST:18.620 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
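
Namespace deletion is a cascade: the namespace controller enumerates every namespaced resource, services included, and deletes them before the namespace object itself goes away. Reproducing the assertion by hand (names illustrative):

  kubectl create namespace nsdelete-demo
  kubectl -n nsdelete-demo create service clusterip test-service --tcp=80:80
  kubectl delete namespace nsdelete-demo --wait=true
  kubectl create namespace nsdelete-demo
  kubectl -n nsdelete-demo get services   # empty: the cascade removed the service
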
SSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:41:30.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 30 17:41:30.793: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 30 17:41:30.847: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 30 17:41:35.854: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 30 17:41:35.856: INFO: Creating deployment "test-rolling-update-deployment"
Aug 30 17:41:35.863: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 30 17:41:35.938: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 30 17:41:38.186: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 30 17:41:38.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734406095, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734406095, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734406095, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734406095, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 30 17:41:40.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734406095, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734406095, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734406095, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734406095, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 30 17:41:42.381: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
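The adoption step above hinges on label selection: the bare replica set created first runs pods labeled name=sample-pod, and a Deployment whose selector matches those labels takes the replica set over as an old revision instead of ignoring it. A minimal sketch of the Deployment's shape, reconstructed from the object dump below (the e2e framework builds it in Go rather than from a manifest):

    # rolling-update.yaml (sketch reconstructed from the dump below)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-rolling-update-deployment
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: sample-pod          # matches the adopted replica set's pods
      strategy:
        type: RollingUpdate         # defaults apply: maxUnavailable 25%, maxSurge 25%
      template:
        metadata:
          labels:
            name: sample-pod
        spec:
          containers:
          - name: redis
            image: gcr.io/kubernetes-e2e-test-images/redis:1.0

    kubectl apply -n deployment-120 -f rolling-update.yaml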
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 30 17:41:42.399: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-120,SelfLink:/apis/apps/v1/namespaces/deployment-120/deployments/test-rolling-update-deployment,UID:0323dad1-c565-4bd1-b416-96049f9d5cfa,ResourceVersion:4069974,Generation:1,CreationTimestamp:2020-08-30 17:41:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-30 17:41:35 +0000 UTC 2020-08-30 17:41:35 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-30 17:41:40 +0000 UTC 2020-08-30 17:41:35 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 30 17:41:42.406: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-120,SelfLink:/apis/apps/v1/namespaces/deployment-120/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:c9292571-e72c-494d-a912-4dc25689a4db,ResourceVersion:4069963,Generation:1,CreationTimestamp:2020-08-30 17:41:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 0323dad1-c565-4bd1-b416-96049f9d5cfa 0x4001f130f7 0x4001f130f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 30 17:41:42.406: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 30 17:41:42.407: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-120,SelfLink:/apis/apps/v1/namespaces/deployment-120/replicasets/test-rolling-update-controller,UID:1683938c-c523-4596-8e69-5f6010cdcbe6,ResourceVersion:4069972,Generation:2,CreationTimestamp:2020-08-30 17:41:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 0323dad1-c565-4bd1-b416-96049f9d5cfa 0x4001f13027 0x4001f13028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
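The two dumps above show the end state the spec asserts: the adopted replica set (test-rolling-update-controller) scaled to 0 at the earlier revision, and the new replica set one revision later running the single pod. The same state can be inspected directly; output is sketched in the comments:

    kubectl get replicasets -n deployment-120
    # the adopted RS shows 0 desired/current/ready; the new 79f6b9d75c RS shows 1/1/1
    kubectl rollout history deployment/test-rolling-update-deployment -n deployment-120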
Aug 30 17:41:42.414: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-5cj5s" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-5cj5s,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-120,SelfLink:/api/v1/namespaces/deployment-120/pods/test-rolling-update-deployment-79f6b9d75c-5cj5s,UID:ed2ffff3-e8eb-4a0a-8c14-d26f5a138630,ResourceVersion:4069962,Generation:0,CreationTimestamp:2020-08-30 17:41:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c c9292571-e72c-494d-a912-4dc25689a4db 0x4003157ea7 0x4003157ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xxzh9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xxzh9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-xxzh9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003157f20} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003157f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:41:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:41:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 17:41:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.191,StartTime:2020-08-30 17:41:36 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-30 17:41:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://f79504e11c6569c7d7c9e65575a63da98f9a268a3d2fd895e5842daea29fff79}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:41:42.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-120" for this suite.
Aug 30 17:41:50.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:41:50.609: INFO: namespace deployment-120 deletion completed in 8.187538593s

• [SLOW TEST:19.883 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:41:50.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
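Overriding the image's default command means setting spec.containers[].command on the pod, which replaces the image's ENTRYPOINT (args, by contrast, replaces CMD). The log does not show the manifest the framework builds, so this is an illustrative sketch with an assumed image:

    # override-command-pod.yaml (illustrative; image and command are assumptions, not from this run)
    apiVersion: v1
    kind: Pod
    metadata:
      name: command-override-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["/bin/echo"]              # replaces the image's ENTRYPOINT
        args: ["command", "overridden"]     # replaces the image's CMD

    kubectl apply -n containers-5301 -f override-command-pod.yaml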
Aug 30 17:41:50.708: INFO: Waiting up to 5m0s for pod "client-containers-9261c854-5fa7-450e-940c-5fe7b8696bfb" in namespace "containers-5301" to be "success or failure"
Aug 30 17:41:50.714: INFO: Pod "client-containers-9261c854-5fa7-450e-940c-5fe7b8696bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.731801ms
Aug 30 17:41:52.721: INFO: Pod "client-containers-9261c854-5fa7-450e-940c-5fe7b8696bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012808048s
Aug 30 17:41:54.726: INFO: Pod "client-containers-9261c854-5fa7-450e-940c-5fe7b8696bfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018506027s
STEP: Saw pod success
Aug 30 17:41:54.727: INFO: Pod "client-containers-9261c854-5fa7-450e-940c-5fe7b8696bfb" satisfied condition "success or failure"
Aug 30 17:41:54.730: INFO: Trying to get logs from node iruya-worker2 pod client-containers-9261c854-5fa7-450e-940c-5fe7b8696bfb container test-container: 
STEP: delete the pod
Aug 30 17:41:54.777: INFO: Waiting for pod client-containers-9261c854-5fa7-450e-940c-5fe7b8696bfb to disappear
Aug 30 17:41:54.788: INFO: Pod client-containers-9261c854-5fa7-450e-940c-5fe7b8696bfb no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:41:54.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5301" for this suite.
Aug 30 17:42:00.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:42:01.001: INFO: namespace containers-5301 deletion completed in 6.205815149s

• [SLOW TEST:10.391 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:42:01.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1705
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-1705
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-1705
Aug 30 17:42:01.290: INFO: Found 0 stateful pods, waiting for 1
Aug 30 17:42:11.298: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 30 17:42:11.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 30 17:42:16.461: INFO: stderr: "I0830 17:42:16.320925    2222 log.go:172] (0x4000b38420) (0x40008cc6e0) Create stream\nI0830 17:42:16.326562    2222 log.go:172] (0x4000b38420) (0x40008cc6e0) Stream added, broadcasting: 1\nI0830 17:42:16.342093    2222 log.go:172] (0x4000b38420) Reply frame received for 1\nI0830 17:42:16.342787    2222 log.go:172] (0x4000b38420) (0x4000672280) Create stream\nI0830 17:42:16.342912    2222 log.go:172] (0x4000b38420) (0x4000672280) Stream added, broadcasting: 3\nI0830 17:42:16.345430    2222 log.go:172] (0x4000b38420) Reply frame received for 3\nI0830 17:42:16.345673    2222 log.go:172] (0x4000b38420) (0x40008cc780) Create stream\nI0830 17:42:16.345727    2222 log.go:172] (0x4000b38420) (0x40008cc780) Stream added, broadcasting: 5\nI0830 17:42:16.346679    2222 log.go:172] (0x4000b38420) Reply frame received for 5\nI0830 17:42:16.390129    2222 log.go:172] (0x4000b38420) Data frame received for 5\nI0830 17:42:16.390393    2222 log.go:172] (0x40008cc780) (5) Data frame handling\nI0830 17:42:16.391181    2222 log.go:172] (0x40008cc780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0830 17:42:16.433158    2222 log.go:172] (0x4000b38420) Data frame received for 3\nI0830 17:42:16.433338    2222 log.go:172] (0x4000672280) (3) Data frame handling\nI0830 17:42:16.433430    2222 log.go:172] (0x4000672280) (3) Data frame sent\nI0830 17:42:16.433527    2222 log.go:172] (0x4000b38420) Data frame received for 3\nI0830 17:42:16.433615    2222 log.go:172] (0x4000672280) (3) Data frame handling\nI0830 17:42:16.433723    2222 log.go:172] (0x4000b38420) Data frame received for 5\nI0830 17:42:16.433842    2222 log.go:172] (0x40008cc780) (5) Data frame handling\nI0830 17:42:16.434981    2222 log.go:172] (0x4000b38420) Data frame received for 1\nI0830 17:42:16.435140    2222 log.go:172] (0x40008cc6e0) (1) Data frame handling\nI0830 17:42:16.435265    2222 log.go:172] (0x40008cc6e0) (1) Data frame sent\nI0830 17:42:16.437502    2222 log.go:172] (0x4000b38420) (0x40008cc6e0) Stream removed, broadcasting: 1\nI0830 17:42:16.439756    2222 log.go:172] (0x4000b38420) Go away received\nI0830 17:42:16.449787    2222 log.go:172] (0x4000b38420) (0x40008cc6e0) Stream removed, broadcasting: 1\nI0830 17:42:16.450126    2222 log.go:172] (0x4000b38420) (0x4000672280) Stream removed, broadcasting: 3\nI0830 17:42:16.450358    2222 log.go:172] (0x4000b38420) (0x40008cc780) Stream removed, broadcasting: 5\n"
Aug 30 17:42:16.462: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 30 17:42:16.462: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
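Moving index.html out of the web root is how the test breaks ss-0's health: the stateful pods serve that file, and once it is gone the readiness check fails, so the pod stays Running but flips to Ready=false without restarting. (The probe definition itself is not shown in this log; an httpGet probe against /index.html would behave exactly this way.) Done by hand, the break and the later restore look like:

    # break readiness on ss-0 by removing the file its readiness check serves
    kubectl exec -n statefulset-1705 ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/'
    # restore it later to let the pod become Ready again
    kubectl exec -n statefulset-1705 ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/'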

Aug 30 17:42:16.468: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 30 17:42:26.475: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 30 17:42:26.475: INFO: Waiting for statefulset status.replicas updated to 0
Aug 30 17:42:26.499: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999995521s
Aug 30 17:42:27.508: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988630418s
Aug 30 17:42:28.516: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.98016553s
Aug 30 17:42:29.525: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.971352898s
Aug 30 17:42:30.532: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.96302744s
Aug 30 17:42:31.539: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.955722403s
Aug 30 17:42:32.545: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.949100946s
Aug 30 17:42:33.552: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.942930626s
Aug 30 17:42:34.559: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.935493398s
Aug 30 17:42:35.567: INFO: Verifying statefulset ss doesn't scale past 1 for another 928.960904ms
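The ten-second countdown above is the halting check itself: a scale to 3 replicas is in play, but with the default podManagementPolicy of OrderedReady the controller will not create ss-1 while ss-0 is unready, so the set holds at one pod. The scale request amounts to:

    kubectl scale statefulset ss -n statefulset-1705 --replicas=3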
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-1705
Aug 30 17:42:36.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:42:38.102: INFO: stderr: "I0830 17:42:37.983366    2259 log.go:172] (0x40005c2790) (0x40009508c0) Create stream\nI0830 17:42:37.989594    2259 log.go:172] (0x40005c2790) (0x40009508c0) Stream added, broadcasting: 1\nI0830 17:42:38.003837    2259 log.go:172] (0x40005c2790) Reply frame received for 1\nI0830 17:42:38.004426    2259 log.go:172] (0x40005c2790) (0x4000950000) Create stream\nI0830 17:42:38.004500    2259 log.go:172] (0x40005c2790) (0x4000950000) Stream added, broadcasting: 3\nI0830 17:42:38.006049    2259 log.go:172] (0x40005c2790) Reply frame received for 3\nI0830 17:42:38.006379    2259 log.go:172] (0x40005c2790) (0x40009500a0) Create stream\nI0830 17:42:38.006445    2259 log.go:172] (0x40005c2790) (0x40009500a0) Stream added, broadcasting: 5\nI0830 17:42:38.007574    2259 log.go:172] (0x40005c2790) Reply frame received for 5\nI0830 17:42:38.078726    2259 log.go:172] (0x40005c2790) Data frame received for 5\nI0830 17:42:38.078993    2259 log.go:172] (0x40005c2790) Data frame received for 3\nI0830 17:42:38.079363    2259 log.go:172] (0x40009500a0) (5) Data frame handling\nI0830 17:42:38.080261    2259 log.go:172] (0x40009500a0) (5) Data frame sent\nI0830 17:42:38.080563    2259 log.go:172] (0x4000950000) (3) Data frame handling\nI0830 17:42:38.080904    2259 log.go:172] (0x4000950000) (3) Data frame sent\nI0830 17:42:38.081113    2259 log.go:172] (0x40005c2790) Data frame received for 3\nI0830 17:42:38.081286    2259 log.go:172] (0x4000950000) (3) Data frame handling\nI0830 17:42:38.081681    2259 log.go:172] (0x40005c2790) Data frame received for 1\nI0830 17:42:38.081795    2259 log.go:172] (0x40009508c0) (1) Data frame handling\nI0830 17:42:38.081905    2259 log.go:172] (0x40009508c0) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0830 17:42:38.082188    2259 log.go:172] (0x40005c2790) Data frame received for 5\nI0830 17:42:38.082277    2259 log.go:172] (0x40009500a0) (5) Data frame handling\nI0830 17:42:38.083971    2259 log.go:172] (0x40005c2790) (0x40009508c0) Stream removed, broadcasting: 1\nI0830 17:42:38.085710    2259 log.go:172] (0x40005c2790) Go away received\nI0830 17:42:38.088529    2259 log.go:172] (0x40005c2790) (0x40009508c0) Stream removed, broadcasting: 1\nI0830 17:42:38.088883    2259 log.go:172] (0x40005c2790) (0x4000950000) Stream removed, broadcasting: 3\nI0830 17:42:38.089103    2259 log.go:172] (0x40005c2790) (0x40009500a0) Stream removed, broadcasting: 5\n"
Aug 30 17:42:38.102: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 30 17:42:38.102: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 30 17:42:38.108: INFO: Found 1 stateful pod, waiting for 3
Aug 30 17:42:48.118: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 30 17:42:48.118: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 30 17:42:48.118: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
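Ordered creation (ss-0 first, each successor gated on its predecessor being Running and Ready) is what the watcher initialized at the start of the spec verifies; the same sequence is visible from a watch on the pod selector the test set up:

    kubectl get pods -n statefulset-1705 -l baz=blah,foo=bar -w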
STEP: Scale down will halt with unhealthy stateful pod
Aug 30 17:42:48.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 30 17:42:49.606: INFO: stderr: "I0830 17:42:49.494581    2283 log.go:172] (0x40004246e0) (0x400089a820) Create stream\nI0830 17:42:49.498891    2283 log.go:172] (0x40004246e0) (0x400089a820) Stream added, broadcasting: 1\nI0830 17:42:49.512288    2283 log.go:172] (0x40004246e0) Reply frame received for 1\nI0830 17:42:49.513045    2283 log.go:172] (0x40004246e0) (0x400089a000) Create stream\nI0830 17:42:49.513115    2283 log.go:172] (0x40004246e0) (0x400089a000) Stream added, broadcasting: 3\nI0830 17:42:49.514511    2283 log.go:172] (0x40004246e0) Reply frame received for 3\nI0830 17:42:49.514783    2283 log.go:172] (0x40004246e0) (0x4000962000) Create stream\nI0830 17:42:49.514847    2283 log.go:172] (0x40004246e0) (0x4000962000) Stream added, broadcasting: 5\nI0830 17:42:49.515850    2283 log.go:172] (0x40004246e0) Reply frame received for 5\nI0830 17:42:49.587122    2283 log.go:172] (0x40004246e0) Data frame received for 3\nI0830 17:42:49.587361    2283 log.go:172] (0x40004246e0) Data frame received for 5\nI0830 17:42:49.587493    2283 log.go:172] (0x4000962000) (5) Data frame handling\nI0830 17:42:49.587663    2283 log.go:172] (0x400089a000) (3) Data frame handling\nI0830 17:42:49.587920    2283 log.go:172] (0x40004246e0) Data frame received for 1\nI0830 17:42:49.588003    2283 log.go:172] (0x400089a820) (1) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0830 17:42:49.589411    2283 log.go:172] (0x400089a000) (3) Data frame sent\nI0830 17:42:49.589504    2283 log.go:172] (0x400089a820) (1) Data frame sent\nI0830 17:42:49.589595    2283 log.go:172] (0x4000962000) (5) Data frame sent\nI0830 17:42:49.589668    2283 log.go:172] (0x40004246e0) Data frame received for 5\nI0830 17:42:49.589729    2283 log.go:172] (0x4000962000) (5) Data frame handling\nI0830 17:42:49.589799    2283 log.go:172] (0x40004246e0) Data frame received for 3\nI0830 17:42:49.589863    2283 log.go:172] (0x400089a000) (3) Data frame handling\nI0830 17:42:49.589953    2283 log.go:172] (0x40004246e0) (0x400089a820) Stream removed, broadcasting: 1\nI0830 17:42:49.593441    2283 log.go:172] (0x40004246e0) Go away received\nI0830 17:42:49.594889    2283 log.go:172] (0x40004246e0) (0x400089a820) Stream removed, broadcasting: 1\nI0830 17:42:49.595335    2283 log.go:172] (0x40004246e0) (0x400089a000) Stream removed, broadcasting: 3\nI0830 17:42:49.595775    2283 log.go:172] (0x40004246e0) (0x4000962000) Stream removed, broadcasting: 5\n"
Aug 30 17:42:49.607: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 30 17:42:49.607: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 30 17:42:49.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 30 17:42:51.253: INFO: stderr: "I0830 17:42:51.080521    2306 log.go:172] (0x400068e420) (0x400093cb40) Create stream\nI0830 17:42:51.087509    2306 log.go:172] (0x400068e420) (0x400093cb40) Stream added, broadcasting: 1\nI0830 17:42:51.103816    2306 log.go:172] (0x400068e420) Reply frame received for 1\nI0830 17:42:51.104549    2306 log.go:172] (0x400068e420) (0x4000930000) Create stream\nI0830 17:42:51.104632    2306 log.go:172] (0x400068e420) (0x4000930000) Stream added, broadcasting: 3\nI0830 17:42:51.106543    2306 log.go:172] (0x400068e420) Reply frame received for 3\nI0830 17:42:51.106912    2306 log.go:172] (0x400068e420) (0x400093c000) Create stream\nI0830 17:42:51.106984    2306 log.go:172] (0x400068e420) (0x400093c000) Stream added, broadcasting: 5\nI0830 17:42:51.108181    2306 log.go:172] (0x400068e420) Reply frame received for 5\nI0830 17:42:51.179986    2306 log.go:172] (0x400068e420) Data frame received for 5\nI0830 17:42:51.180269    2306 log.go:172] (0x400093c000) (5) Data frame handling\nI0830 17:42:51.181057    2306 log.go:172] (0x400093c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0830 17:42:51.225694    2306 log.go:172] (0x400068e420) Data frame received for 3\nI0830 17:42:51.225899    2306 log.go:172] (0x4000930000) (3) Data frame handling\nI0830 17:42:51.226098    2306 log.go:172] (0x400068e420) Data frame received for 5\nI0830 17:42:51.226251    2306 log.go:172] (0x400093c000) (5) Data frame handling\nI0830 17:42:51.226574    2306 log.go:172] (0x4000930000) (3) Data frame sent\nI0830 17:42:51.226770    2306 log.go:172] (0x400068e420) Data frame received for 3\nI0830 17:42:51.226903    2306 log.go:172] (0x4000930000) (3) Data frame handling\nI0830 17:42:51.227580    2306 log.go:172] (0x400068e420) Data frame received for 1\nI0830 17:42:51.227670    2306 log.go:172] (0x400093cb40) (1) Data frame handling\nI0830 17:42:51.227758    2306 log.go:172] (0x400093cb40) (1) Data frame sent\nI0830 17:42:51.228924    2306 log.go:172] (0x400068e420) (0x400093cb40) Stream removed, broadcasting: 1\nI0830 17:42:51.233573    2306 log.go:172] (0x400068e420) Go away received\nI0830 17:42:51.236076    2306 log.go:172] (0x400068e420) (0x400093cb40) Stream removed, broadcasting: 1\nI0830 17:42:51.236685    2306 log.go:172] (0x400068e420) (0x4000930000) Stream removed, broadcasting: 3\nI0830 17:42:51.237474    2306 log.go:172] (0x400068e420) (0x400093c000) Stream removed, broadcasting: 5\n"
Aug 30 17:42:51.254: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 30 17:42:51.254: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 30 17:42:51.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 30 17:42:52.844: INFO: stderr: "I0830 17:42:52.671330    2329 log.go:172] (0x400067a0b0) (0x40008dc1e0) Create stream\nI0830 17:42:52.675090    2329 log.go:172] (0x400067a0b0) (0x40008dc1e0) Stream added, broadcasting: 1\nI0830 17:42:52.690197    2329 log.go:172] (0x400067a0b0) Reply frame received for 1\nI0830 17:42:52.690714    2329 log.go:172] (0x400067a0b0) (0x40008dc280) Create stream\nI0830 17:42:52.690767    2329 log.go:172] (0x400067a0b0) (0x40008dc280) Stream added, broadcasting: 3\nI0830 17:42:52.692101    2329 log.go:172] (0x400067a0b0) Reply frame received for 3\nI0830 17:42:52.692317    2329 log.go:172] (0x400067a0b0) (0x40008dc320) Create stream\nI0830 17:42:52.692366    2329 log.go:172] (0x400067a0b0) (0x40008dc320) Stream added, broadcasting: 5\nI0830 17:42:52.694003    2329 log.go:172] (0x400067a0b0) Reply frame received for 5\nI0830 17:42:52.778157    2329 log.go:172] (0x400067a0b0) Data frame received for 5\nI0830 17:42:52.778381    2329 log.go:172] (0x40008dc320) (5) Data frame handling\nI0830 17:42:52.778813    2329 log.go:172] (0x40008dc320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0830 17:42:52.822626    2329 log.go:172] (0x400067a0b0) Data frame received for 3\nI0830 17:42:52.822758    2329 log.go:172] (0x40008dc280) (3) Data frame handling\nI0830 17:42:52.822825    2329 log.go:172] (0x40008dc280) (3) Data frame sent\nI0830 17:42:52.822918    2329 log.go:172] (0x400067a0b0) Data frame received for 5\nI0830 17:42:52.823055    2329 log.go:172] (0x40008dc320) (5) Data frame handling\nI0830 17:42:52.823134    2329 log.go:172] (0x400067a0b0) Data frame received for 3\nI0830 17:42:52.823204    2329 log.go:172] (0x40008dc280) (3) Data frame handling\nI0830 17:42:52.824266    2329 log.go:172] (0x400067a0b0) Data frame received for 1\nI0830 17:42:52.824339    2329 log.go:172] (0x40008dc1e0) (1) Data frame handling\nI0830 17:42:52.824394    2329 log.go:172] (0x40008dc1e0) (1) Data frame sent\nI0830 17:42:52.825681    2329 log.go:172] (0x400067a0b0) (0x40008dc1e0) Stream removed, broadcasting: 1\nI0830 17:42:52.826933    2329 log.go:172] (0x400067a0b0) Go away received\nI0830 17:42:52.829425    2329 log.go:172] (0x400067a0b0) (0x40008dc1e0) Stream removed, broadcasting: 1\nI0830 17:42:52.829657    2329 log.go:172] (0x400067a0b0) (0x40008dc280) Stream removed, broadcasting: 3\nI0830 17:42:52.829798    2329 log.go:172] (0x400067a0b0) (0x40008dc320) Stream removed, broadcasting: 5\n"
Aug 30 17:42:52.844: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 30 17:42:52.844: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 30 17:42:52.845: INFO: Waiting for statefulset status.replicas updated to 0
Aug 30 17:42:52.849: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 30 17:43:02.863: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 30 17:43:02.863: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 30 17:43:02.863: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 30 17:43:02.878: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999995862s
Aug 30 17:43:03.890: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994129257s
Aug 30 17:43:04.899: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982421276s
Aug 30 17:43:05.910: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.973755928s
Aug 30 17:43:06.918: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.962654526s
Aug 30 17:43:07.964: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.954145746s
Aug 30 17:43:08.973: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.908330228s
Aug 30 17:43:09.983: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.898976213s
Aug 30 17:43:10.991: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.889486808s
Aug 30 17:43:12.000: INFO: Verifying statefulset ss doesn't scale past 3 for another 881.41015ms
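This countdown mirrors the scale-up check: every pod has been made unready and, by symmetry with the earlier step, the replica count has already been lowered, yet the controller deletes nothing while the set is unhealthy, holding at three pods. The scale request is equivalent to:

    kubectl scale statefulset ss -n statefulset-1705 --replicas=0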
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-1705
Aug 30 17:43:13.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:43:14.474: INFO: stderr: "I0830 17:43:14.358906    2351 log.go:172] (0x400010e000) (0x40009341e0) Create stream\nI0830 17:43:14.362079    2351 log.go:172] (0x400010e000) (0x40009341e0) Stream added, broadcasting: 1\nI0830 17:43:14.375077    2351 log.go:172] (0x400010e000) Reply frame received for 1\nI0830 17:43:14.376274    2351 log.go:172] (0x400010e000) (0x4000650500) Create stream\nI0830 17:43:14.376430    2351 log.go:172] (0x400010e000) (0x4000650500) Stream added, broadcasting: 3\nI0830 17:43:14.378640    2351 log.go:172] (0x400010e000) Reply frame received for 3\nI0830 17:43:14.379054    2351 log.go:172] (0x400010e000) (0x4000934280) Create stream\nI0830 17:43:14.379144    2351 log.go:172] (0x400010e000) (0x4000934280) Stream added, broadcasting: 5\nI0830 17:43:14.380576    2351 log.go:172] (0x400010e000) Reply frame received for 5\nI0830 17:43:14.447997    2351 log.go:172] (0x400010e000) Data frame received for 5\nI0830 17:43:14.448511    2351 log.go:172] (0x400010e000) Data frame received for 1\nI0830 17:43:14.448866    2351 log.go:172] (0x4000934280) (5) Data frame handling\nI0830 17:43:14.449538    2351 log.go:172] (0x400010e000) Data frame received for 3\nI0830 17:43:14.449724    2351 log.go:172] (0x4000650500) (3) Data frame handling\nI0830 17:43:14.449864    2351 log.go:172] (0x40009341e0) (1) Data frame handling\nI0830 17:43:14.450920    2351 log.go:172] (0x40009341e0) (1) Data frame sent\nI0830 17:43:14.451120    2351 log.go:172] (0x4000650500) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0830 17:43:14.451345    2351 log.go:172] (0x4000934280) (5) Data frame sent\nI0830 17:43:14.451479    2351 log.go:172] (0x400010e000) Data frame received for 5\nI0830 17:43:14.451565    2351 log.go:172] (0x4000934280) (5) Data frame handling\nI0830 17:43:14.451698    2351 log.go:172] (0x400010e000) Data frame received for 3\nI0830 17:43:14.451766    2351 log.go:172] (0x4000650500) (3) Data frame handling\nI0830 17:43:14.454406    2351 log.go:172] (0x400010e000) (0x40009341e0) Stream removed, broadcasting: 1\nI0830 17:43:14.455479    2351 log.go:172] (0x400010e000) Go away received\nI0830 17:43:14.459218    2351 log.go:172] (0x400010e000) (0x40009341e0) Stream removed, broadcasting: 1\nI0830 17:43:14.459427    2351 log.go:172] (0x400010e000) (0x4000650500) Stream removed, broadcasting: 3\nI0830 17:43:14.459577    2351 log.go:172] (0x400010e000) (0x4000934280) Stream removed, broadcasting: 5\n"
Aug 30 17:43:14.475: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 30 17:43:14.475: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 30 17:43:14.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:43:15.971: INFO: stderr: "I0830 17:43:15.869392    2374 log.go:172] (0x4000864000) (0x400096e1e0) Create stream\nI0830 17:43:15.871926    2374 log.go:172] (0x4000864000) (0x400096e1e0) Stream added, broadcasting: 1\nI0830 17:43:15.880490    2374 log.go:172] (0x4000864000) Reply frame received for 1\nI0830 17:43:15.881060    2374 log.go:172] (0x4000864000) (0x400065a1e0) Create stream\nI0830 17:43:15.881124    2374 log.go:172] (0x4000864000) (0x400065a1e0) Stream added, broadcasting: 3\nI0830 17:43:15.882437    2374 log.go:172] (0x4000864000) Reply frame received for 3\nI0830 17:43:15.882717    2374 log.go:172] (0x4000864000) (0x40003dc000) Create stream\nI0830 17:43:15.882781    2374 log.go:172] (0x4000864000) (0x40003dc000) Stream added, broadcasting: 5\nI0830 17:43:15.883843    2374 log.go:172] (0x4000864000) Reply frame received for 5\nI0830 17:43:15.952570    2374 log.go:172] (0x4000864000) Data frame received for 1\nI0830 17:43:15.952870    2374 log.go:172] (0x4000864000) Data frame received for 3\nI0830 17:43:15.952995    2374 log.go:172] (0x400096e1e0) (1) Data frame handling\nI0830 17:43:15.953123    2374 log.go:172] (0x4000864000) Data frame received for 5\nI0830 17:43:15.953203    2374 log.go:172] (0x40003dc000) (5) Data frame handling\nI0830 17:43:15.953850    2374 log.go:172] (0x400065a1e0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0830 17:43:15.954524    2374 log.go:172] (0x400065a1e0) (3) Data frame sent\nI0830 17:43:15.954675    2374 log.go:172] (0x4000864000) Data frame received for 3\nI0830 17:43:15.954728    2374 log.go:172] (0x400065a1e0) (3) Data frame handling\nI0830 17:43:15.954813    2374 log.go:172] (0x400096e1e0) (1) Data frame sent\nI0830 17:43:15.955024    2374 log.go:172] (0x40003dc000) (5) Data frame sent\nI0830 17:43:15.955076    2374 log.go:172] (0x4000864000) Data frame received for 5\nI0830 17:43:15.955129    2374 log.go:172] (0x40003dc000) (5) Data frame handling\nI0830 17:43:15.955629    2374 log.go:172] (0x4000864000) (0x400096e1e0) Stream removed, broadcasting: 1\nI0830 17:43:15.958058    2374 log.go:172] (0x4000864000) Go away received\nI0830 17:43:15.960322    2374 log.go:172] (0x4000864000) (0x400096e1e0) Stream removed, broadcasting: 1\nI0830 17:43:15.960548    2374 log.go:172] (0x4000864000) (0x400065a1e0) Stream removed, broadcasting: 3\nI0830 17:43:15.960682    2374 log.go:172] (0x4000864000) (0x40003dc000) Stream removed, broadcasting: 5\n"
Aug 30 17:43:15.972: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 30 17:43:15.972: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 30 17:43:15.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:43:17.766: INFO: rc: 1
Aug 30 17:43:17.767: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    I0830 17:43:17.350563    2397 log.go:172] (0x400089e000) (0x40008321e0) Create stream
I0830 17:43:17.355685    2397 log.go:172] (0x400089e000) (0x40008321e0) Stream added, broadcasting: 1
I0830 17:43:17.370137    2397 log.go:172] (0x400089e000) Reply frame received for 1
I0830 17:43:17.371186    2397 log.go:172] (0x400089e000) (0x4000832280) Create stream
I0830 17:43:17.371281    2397 log.go:172] (0x400089e000) (0x4000832280) Stream added, broadcasting: 3
I0830 17:43:17.373564    2397 log.go:172] (0x400089e000) Reply frame received for 3
I0830 17:43:17.374113    2397 log.go:172] (0x400089e000) (0x40006621e0) Create stream
I0830 17:43:17.374242    2397 log.go:172] (0x400089e000) (0x40006621e0) Stream added, broadcasting: 5
I0830 17:43:17.375913    2397 log.go:172] (0x400089e000) Reply frame received for 5
I0830 17:43:17.740683    2397 log.go:172] (0x400089e000) Data frame received for 1
I0830 17:43:17.741617    2397 log.go:172] (0x40008321e0) (1) Data frame handling
I0830 17:43:17.743072    2397 log.go:172] (0x400089e000) (0x4000832280) Stream removed, broadcasting: 3
I0830 17:43:17.744291    2397 log.go:172] (0x400089e000) (0x40006621e0) Stream removed, broadcasting: 5
I0830 17:43:17.746855    2397 log.go:172] (0x40008321e0) (1) Data frame sent
I0830 17:43:17.748004    2397 log.go:172] (0x400089e000) (0x40008321e0) Stream removed, broadcasting: 1
I0830 17:43:17.749760    2397 log.go:172] (0x400089e000) Go away received
I0830 17:43:17.752838    2397 log.go:172] (0x400089e000) (0x40008321e0) Stream removed, broadcasting: 1
I0830 17:43:17.753425    2397 log.go:172] (0x400089e000) (0x4000832280) Stream removed, broadcasting: 3
I0830 17:43:17.753488    2397 log.go:172] (0x400089e000) (0x40006621e0) Stream removed, broadcasting: 5
error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "f0d55c440627cb753a8f84c274a0ae1b8beb5deae1ccb34946b0ab704842843c": task 82e2af9639ab6ac0cb916519c3bacbcdf2e613ecdcb4228daf067e9c00e41bad not found: not found
 []  0x4001e352f0 exit status 1   true [0x4000011a30 0x4000011ad8 0x4000011bd0] [0x4000011a30 0x4000011ad8 0x4000011bd0] [0x4000011a78 0x4000011b88] [0xad5158 0xad5158] 0x4002d3e1e0 }:
Command stdout:

stderr:
(the stderr output repeats the stream log and internal error shown above verbatim)

error:
exit status 1
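The failure above is a consequence of the scale-down just triggered: once readiness was restored, the controller began deleting pods in reverse ordinal order, so ss-2 went first, and the test's attempt to restore its index.html apparently raced with that deletion. The container was already being torn down when the exec was created (hence the containerd "task ... not found"), and by the next retry the pod object itself is gone:

    kubectl get pod ss-2 -n statefulset-1705
    # Error from server (NotFound): pods "ss-2" not found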
Aug 30 17:43:27.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:43:29.074: INFO: rc: 1
Aug 30 17:43:29.075: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x4001e353b0 exit status 1   true [0x4000011bf8 0x4000011cf0 0x4000011d50] [0x4000011bf8 0x4000011cf0 0x4000011d50] [0x4000011c88 0x4000011d48] [0xad5158 0xad5158] 0x4002d3e780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
(thirteen further RunHostCmd attempts against ss-2, logged roughly every 11 seconds from 17:43:39 through 17:45:56, fail identically with 'Error from server (NotFound): pods "ss-2" not found', exit status 1; the repeated blocks are elided here)
Aug 30 17:46:06.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:46:07.346: INFO: rc: 1
Aug 30 17:46:07.346: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x4001dd61b0 exit status 1   true [0x4000010f08 0x40000112e8 0x40000115d0] [0x4000010f08 0x40000112e8 0x40000115d0] [0x40000111b8 0x4000011580] [0xad5158 0xad5158] 0x40029b8ba0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 30 17:46:17.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:46:18.629: INFO: rc: 1
Aug 30 17:46:18.631: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x4001dd6270 exit status 1   true [0x4000011610 0x4000011660 0x40000117a0] [0x4000011610 0x4000011660 0x40000117a0] [0x4000011650 0x4000011790] [0xad5158 0xad5158] 0x40029b9200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 30 17:46:28.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:46:29.958: INFO: rc: 1
Aug 30 17:46:29.959: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x4001b940c0 exit status 1   true [0x4000a061b8 0x4000a06328 0x4000a06778] [0x4000a061b8 0x4000a06328 0x4000a06778] [0x4000a062f0 0x4000a06648] [0xad5158 0xad5158] 0x4001d04360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 30 17:46:39.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:46:41.244: INFO: rc: 1
Aug 30 17:46:41.244: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x40027ce0c0 exit status 1   true [0x400019ff40 0x40025fc008 0x40025fc020] [0x400019ff40 0x40025fc008 0x40025fc020] [0x40025fc000 0x40025fc018] [0xad5158 0xad5158] 0x4002dac420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 30 17:46:51.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:46:52.537: INFO: rc: 1
Aug 30 17:46:52.537: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x4002f800f0 exit status 1   true [0x400267c018 0x400267c030 0x400267c080] [0x400267c018 0x400267c030 0x400267c080] [0x400267c028 0x400267c050] [0xad5158 0xad5158] 0x40023008a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 30 17:47:02.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:47:03.851: INFO: rc: 1
Aug 30 17:47:03.852: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x4001b941e0 exit status 1   true [0x4000a06a30 0x4000a06f18 0x4000a07600] [0x4000a06a30 0x4000a06f18 0x4000a07600] [0x4000a06b70 0x4000a073f0] [0xad5158 0xad5158] 0x4001d04720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 30 17:47:13.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:47:15.106: INFO: rc: 1
Aug 30 17:47:15.106: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x4002f801b0 exit status 1   true [0x400267c0a8 0x400267c0f0 0x400267c170] [0x400267c0a8 0x400267c0f0 0x400267c170] [0x400267c0e8 0x400267c150] [0xad5158 0xad5158] 0x4002300f00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 30 17:47:25.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:47:26.424: INFO: rc: 1
Aug 30 17:47:26.424: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x40027ce1e0 exit status 1   true [0x40025fc028 0x40025fc040 0x40025fc058] [0x40025fc028 0x40025fc040 0x40025fc058] [0x40025fc038 0x40025fc050] [0xad5158 0xad5158] 0x4002dacba0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 30 17:47:36.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:47:37.686: INFO: rc: 1
Aug 30 17:47:37.686: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x4001b94330 exit status 1   true [0x4000a07678 0x4000a07790 0x4000a07b30] [0x4000a07678 0x4000a07790 0x4000a07b30] [0x4000a07730 0x4000a07a68] [0xad5158 0xad5158] 0x4001d04ba0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 30 17:47:47.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:47:48.977: INFO: rc: 1
Aug 30 17:47:48.977: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x40027ce2d0 exit status 1   true [0x40025fc060 0x40025fc078 0x40025fc090] [0x40025fc060 0x40025fc078 0x40025fc090] [0x40025fc070 0x40025fc088] [0xad5158 0xad5158] 0x4002dad7a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 30 17:47:58.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:48:00.289: INFO: rc: 1
Aug 30 17:48:00.289: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x4002f802d0 exit status 1   true [0x400267c198 0x400267c1d8 0x400267c210] [0x400267c198 0x400267c1d8 0x400267c210] [0x400267c1d0 0x400267c1f0] [0xad5158 0xad5158] 0x4002301860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 30 17:48:10.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:48:11.605: INFO: rc: 1
Aug 30 17:48:11.605: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x4001b944e0 exit status 1   true [0x4000a07be8 0x4000a07ce0 0x4000a07de0] [0x4000a07be8 0x4000a07ce0 0x4000a07de0] [0x4000a07cd0 0x4000a07d98] [0xad5158 0xad5158] 0x4001d051a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 30 17:48:21.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1705 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 30 17:48:22.915: INFO: rc: 1
Aug 30 17:48:22.915: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Aug 30 17:48:22.915: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 30 17:48:22.935: INFO: Deleting all statefulset in ns statefulset-1705
Aug 30 17:48:22.939: INFO: Scaling statefulset ss to 0
Aug 30 17:48:22.950: INFO: Waiting for statefulset status.replicas updated to 0
Aug 30 17:48:22.954: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:48:22.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1705" for this suite.
Aug 30 17:48:29.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:48:29.135: INFO: namespace statefulset-1705 deletion completed in 6.150888177s

• [SLOW TEST:388.127 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
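The scale-down that the log records above ("Scaling statefulset ss to 0", then waiting for status.replicas to reach 0) can be reproduced outside the e2e harness with a few client-go calls. The following is a minimal sketch, not the framework's own code: the namespace, StatefulSet name, kubeconfig path, and 10s cadence are taken from the run above, and the method signatures (notably the context argument) assume a recent client-go release.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ns := "statefulset-1705" // namespace from the run above

	// Scale the StatefulSet "ss" to 0 replicas. Production code would
	// retry this Update on a conflict error.
	ss, err := cs.AppsV1().StatefulSets(ns).Get(ctx, "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	zero := int32(0)
	ss.Spec.Replicas = &zero
	if _, err := cs.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Poll every 10s (the same cadence as the e2e retry loop above) until
	// status.replicas reports 0, giving up after 5 minutes.
	for i := 0; i < 30; i++ {
		cur, err := cs.AppsV1().StatefulSets(ns).Get(ctx, "ss", metav1.GetOptions{})
		if err == nil && cur.Status.Replicas == 0 {
			fmt.Println("statefulset scaled down to 0")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for scale-down")
}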
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:48:29.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8182
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 30 17:48:29.214: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 30 17:48:51.397: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.247:8080/dial?request=hostName&protocol=http&host=10.244.1.194&port=8080&tries=1'] Namespace:pod-network-test-8182 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 30 17:48:51.397: INFO: >>> kubeConfig: /root/.kube/config
I0830 17:48:51.461642       7 log.go:172] (0x400151e000) (0x40020be780) Create stream
I0830 17:48:51.461822       7 log.go:172] (0x400151e000) (0x40020be780) Stream added, broadcasting: 1
I0830 17:48:51.465868       7 log.go:172] (0x400151e000) Reply frame received for 1
I0830 17:48:51.466017       7 log.go:172] (0x400151e000) (0x40020be820) Create stream
I0830 17:48:51.466095       7 log.go:172] (0x400151e000) (0x40020be820) Stream added, broadcasting: 3
I0830 17:48:51.468151       7 log.go:172] (0x400151e000) Reply frame received for 3
I0830 17:48:51.468347       7 log.go:172] (0x400151e000) (0x40020be8c0) Create stream
I0830 17:48:51.468454       7 log.go:172] (0x400151e000) (0x40020be8c0) Stream added, broadcasting: 5
I0830 17:48:51.470266       7 log.go:172] (0x400151e000) Reply frame received for 5
I0830 17:48:51.568329       7 log.go:172] (0x400151e000) Data frame received for 3
I0830 17:48:51.568554       7 log.go:172] (0x40020be820) (3) Data frame handling
I0830 17:48:51.568885       7 log.go:172] (0x40020be820) (3) Data frame sent
I0830 17:48:51.569048       7 log.go:172] (0x400151e000) Data frame received for 3
I0830 17:48:51.569234       7 log.go:172] (0x40020be820) (3) Data frame handling
I0830 17:48:51.569448       7 log.go:172] (0x400151e000) Data frame received for 5
I0830 17:48:51.569607       7 log.go:172] (0x40020be8c0) (5) Data frame handling
I0830 17:48:51.571082       7 log.go:172] (0x400151e000) Data frame received for 1
I0830 17:48:51.571236       7 log.go:172] (0x40020be780) (1) Data frame handling
I0830 17:48:51.571373       7 log.go:172] (0x40020be780) (1) Data frame sent
I0830 17:48:51.571523       7 log.go:172] (0x400151e000) (0x40020be780) Stream removed, broadcasting: 1
I0830 17:48:51.571677       7 log.go:172] (0x400151e000) Go away received
I0830 17:48:51.572049       7 log.go:172] (0x400151e000) (0x40020be780) Stream removed, broadcasting: 1
I0830 17:48:51.572180       7 log.go:172] (0x400151e000) (0x40020be820) Stream removed, broadcasting: 3
I0830 17:48:51.572288       7 log.go:172] (0x400151e000) (0x40020be8c0) Stream removed, broadcasting: 5
Aug 30 17:48:51.572: INFO: Waiting for endpoints: map[]
Aug 30 17:48:51.578: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.247:8080/dial?request=hostName&protocol=http&host=10.244.2.246&port=8080&tries=1'] Namespace:pod-network-test-8182 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 30 17:48:51.578: INFO: >>> kubeConfig: /root/.kube/config
I0830 17:48:51.635011       7 log.go:172] (0x4001938210) (0x400045fa40) Create stream
I0830 17:48:51.635210       7 log.go:172] (0x4001938210) (0x400045fa40) Stream added, broadcasting: 1
I0830 17:48:51.638717       7 log.go:172] (0x4001938210) Reply frame received for 1
I0830 17:48:51.638955       7 log.go:172] (0x4001938210) (0x400360a0a0) Create stream
I0830 17:48:51.639042       7 log.go:172] (0x4001938210) (0x400360a0a0) Stream added, broadcasting: 3
I0830 17:48:51.640459       7 log.go:172] (0x4001938210) Reply frame received for 3
I0830 17:48:51.640592       7 log.go:172] (0x4001938210) (0x400360a140) Create stream
I0830 17:48:51.640652       7 log.go:172] (0x4001938210) (0x400360a140) Stream added, broadcasting: 5
I0830 17:48:51.641956       7 log.go:172] (0x4001938210) Reply frame received for 5
I0830 17:48:51.719820       7 log.go:172] (0x4001938210) Data frame received for 3
I0830 17:48:51.719999       7 log.go:172] (0x400360a0a0) (3) Data frame handling
I0830 17:48:51.720122       7 log.go:172] (0x400360a0a0) (3) Data frame sent
I0830 17:48:51.720263       7 log.go:172] (0x4001938210) Data frame received for 5
I0830 17:48:51.720428       7 log.go:172] (0x400360a140) (5) Data frame handling
I0830 17:48:51.720527       7 log.go:172] (0x4001938210) Data frame received for 3
I0830 17:48:51.720604       7 log.go:172] (0x400360a0a0) (3) Data frame handling
I0830 17:48:51.724494       7 log.go:172] (0x4001938210) Data frame received for 1
I0830 17:48:51.724637       7 log.go:172] (0x400045fa40) (1) Data frame handling
I0830 17:48:51.724953       7 log.go:172] (0x400045fa40) (1) Data frame sent
I0830 17:48:51.725105       7 log.go:172] (0x4001938210) (0x400045fa40) Stream removed, broadcasting: 1
I0830 17:48:51.725278       7 log.go:172] (0x4001938210) Go away received
I0830 17:48:51.726011       7 log.go:172] (0x4001938210) (0x400045fa40) Stream removed, broadcasting: 1
I0830 17:48:51.726156       7 log.go:172] (0x4001938210) (0x400360a0a0) Stream removed, broadcasting: 3
I0830 17:48:51.726235       7 log.go:172] (0x4001938210) (0x400360a140) Stream removed, broadcasting: 5
Aug 30 17:48:51.726: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:48:51.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8182" for this suite.
Aug 30 17:49:13.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:49:13.897: INFO: namespace pod-network-test-8182 deletion completed in 22.164030472s

• [SLOW TEST:44.760 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
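The ExecWithOptions lines above show the whole mechanism of this test: a hostexec pod curls the netexec container's /dial endpoint, which in turn dials each target pod's hostName endpoint over HTTP and echoes back what it received. The same probe can be issued with nothing but the Go standard library; in this sketch the pod IPs are the ephemeral ones from this run and stand in for whatever your cluster assigns.

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

// dial asks the netexec container at proxyAddr to make an HTTP request to
// host:port and report the hostName it got back, mirroring the e2e curl.
func dial(proxyAddr, host string, port int) (string, error) {
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "http")
	q.Set("host", host)
	q.Set("port", fmt.Sprint(port))
	q.Set("tries", "1")

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://" + proxyAddr + "/dial?" + q.Encode())
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	// Pod IPs taken from the run above; they are illustrative only.
	for _, target := range []string{"10.244.1.194", "10.244.2.246"} {
		out, err := dial("10.244.2.247:8080", target, 8080)
		fmt.Printf("dial %s -> %q err=%v\n", target, out, err)
	}
}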
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:49:13.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Aug 30 17:49:13.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 30 17:49:15.216: INFO: stderr: ""
Aug 30 17:49:15.217: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35471\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35471/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:49:15.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8775" for this suite.
Aug 30 17:49:21.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:49:21.390: INFO: namespace kubectl-8775 deletion completed in 6.16666398s

• [SLOW TEST:7.492 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
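The validation step above simply runs kubectl cluster-info and checks the captured stdout for the master and KubeDNS entries; the \x1b[0;32m escapes in the logged stdout are ANSI color codes, which is why matching on substrings rather than whole lines is the safer check. A minimal re-creation via os/exec, assuming kubectl is on PATH and the kubeconfig path from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "cluster-info").CombinedOutput()
	if err != nil {
		fmt.Println("cluster-info failed:", err)
		return
	}
	// The output is ANSI-colorized, so check substrings, not exact lines.
	for _, want := range []string{"Kubernetes master", "KubeDNS"} {
		if !strings.Contains(string(out), want) {
			fmt.Printf("missing %q in cluster-info output\n", want)
		}
	}
	fmt.Print(string(out))
}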
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:49:21.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Aug 30 17:49:21.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8210'
Aug 30 17:49:23.356: INFO: stderr: ""
Aug 30 17:49:23.356: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Aug 30 17:49:24.364: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 17:49:24.364: INFO: Found 0 / 1
Aug 30 17:49:25.365: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 17:49:25.365: INFO: Found 0 / 1
Aug 30 17:49:26.640: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 17:49:26.640: INFO: Found 0 / 1
Aug 30 17:49:27.413: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 17:49:27.413: INFO: Found 0 / 1
Aug 30 17:49:28.364: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 17:49:28.364: INFO: Found 0 / 1
Aug 30 17:49:29.363: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 17:49:29.363: INFO: Found 1 / 1
Aug 30 17:49:29.363: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 30 17:49:29.368: INFO: Selector matched 1 pods for map[app:redis]
Aug 30 17:49:29.368: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Aug 30 17:49:29.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5l4fw redis-master --namespace=kubectl-8210'
Aug 30 17:49:30.691: INFO: stderr: ""
Aug 30 17:49:30.691: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 30 Aug 17:49:28.337 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Aug 17:49:28.337 # Server started, Redis version 3.2.12\n1:M 30 Aug 17:49:28.337 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Aug 17:49:28.337 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Aug 30 17:49:30.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5l4fw redis-master --namespace=kubectl-8210 --tail=1'
Aug 30 17:49:32.002: INFO: stderr: ""
Aug 30 17:49:32.002: INFO: stdout: "1:M 30 Aug 17:49:28.337 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Aug 30 17:49:32.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5l4fw redis-master --namespace=kubectl-8210 --limit-bytes=1'
Aug 30 17:49:33.315: INFO: stderr: ""
Aug 30 17:49:33.315: INFO: stdout: " "
STEP: exposing timestamps
Aug 30 17:49:33.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5l4fw redis-master --namespace=kubectl-8210 --tail=1 --timestamps'
Aug 30 17:49:34.621: INFO: stderr: ""
Aug 30 17:49:34.621: INFO: stdout: "2020-08-30T17:49:28.337822625Z 1:M 30 Aug 17:49:28.337 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Aug 30 17:49:37.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5l4fw redis-master --namespace=kubectl-8210 --since=1s'
Aug 30 17:49:38.405: INFO: stderr: ""
Aug 30 17:49:38.405: INFO: stdout: ""
Aug 30 17:49:38.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5l4fw redis-master --namespace=kubectl-8210 --since=24h'
Aug 30 17:49:39.743: INFO: stderr: ""
Aug 30 17:49:39.743: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 30 Aug 17:49:28.337 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Aug 17:49:28.337 # Server started, Redis version 3.2.12\n1:M 30 Aug 17:49:28.337 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Aug 17:49:28.337 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Aug 30 17:49:39.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8210'
Aug 30 17:49:41.020: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 30 17:49:41.021: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Aug 30 17:49:41.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-8210'
Aug 30 17:49:42.309: INFO: stderr: "No resources found.\n"
Aug 30 17:49:42.309: INFO: stdout: ""
Aug 30 17:49:42.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-8210 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 30 17:49:43.607: INFO: stderr: ""
Aug 30 17:49:43.607: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:49:43.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8210" for this suite.
Aug 30 17:49:49.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:49:49.783: INFO: namespace kubectl-8210 deletion completed in 6.165671475s

• [SLOW TEST:28.392 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
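The STEPs above exercise the log-filtering flags of kubectl logs one at a time: --tail=1 keeps only the last line, --limit-bytes=1 truncates to a single byte (hence the lone-space stdout), --timestamps prefixes each line with its RFC3339 write time, and --since bounds output by age (empty for --since=1s, the full log for --since=24h). A sketch that replays the same sequence with os/exec; the pod and namespace names are the ones this particular run generated and would differ on yours.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Base invocation; pod/namespace are placeholders from this run.
	base := []string{"--kubeconfig=/root/.kube/config", "logs", "redis-master-5l4fw", "redis-master", "--namespace=kubectl-8210"}

	for _, extra := range [][]string{
		nil,                          // full log
		{"--tail=1"},                 // last line only
		{"--limit-bytes=1"},          // first byte only
		{"--tail=1", "--timestamps"}, // last line with its timestamp
		{"--since=1s"},               // only entries from the last second
		{"--since=24h"},              // everything from the last day
	} {
		out, err := exec.Command("kubectl", append(base, extra...)...).CombinedOutput()
		fmt.Printf("kubectl logs %v => err=%v\n%s\n", extra, err, out)
	}
}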
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:49:49.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 30 17:50:00.023: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 30 17:50:00.032: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 30 17:50:02.033: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 30 17:50:02.041: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 30 17:50:04.033: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 30 17:50:04.040: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 30 17:50:06.033: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 30 17:50:06.040: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 30 17:50:08.033: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 30 17:50:08.040: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 30 17:50:10.033: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 30 17:50:10.041: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 30 17:50:12.033: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 30 17:50:12.040: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 30 17:50:14.033: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 30 17:50:14.040: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 30 17:50:16.033: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 30 17:50:16.039: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 30 17:50:18.033: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 30 17:50:18.039: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 30 17:50:20.033: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 30 17:50:20.047: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:50:20.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-922" for this suite.
Aug 30 17:50:42.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:50:42.238: INFO: namespace container-lifecycle-hook-922 deletion completed in 22.173637991s

• [SLOW TEST:52.453 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
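The roughly 20 seconds of "still exists" polling above is the pod's graceful deletion window: a preStop exec handler runs inside the container after deletion starts and must complete (or time out) before the container is killed, so the pod object lingers until then. A sketch of the relevant slice of such a pod spec using the corev1 Go types; the image and hook command here are illustrative, and the handler type is corev1.Handler in v1.15-era APIs (later releases renamed it corev1.LifecycleHandler).

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-exec-hook",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image
				Lifecycle: &corev1.Lifecycle{
					// Runs in the container when deletion starts,
					// before the container receives SIGTERM.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Illustrative command; the e2e notifies a
							// separate handler pod from its hook.
							Command: []string{"sh", "-c", "echo prestop"},
						},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Containers[0].Lifecycle)
}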
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:50:42.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 30 17:50:47.387: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:50:47.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3341" for this suite.
Aug 30 17:50:53.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:50:53.638: INFO: namespace container-runtime-3341 deletion completed in 6.169850914s

• [SLOW TEST:11.399 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
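The test above has the container write "DONE" to a non-default terminationMessagePath while running as a non-root user, then asserts that the kubelet copied the file's contents into the terminated container status (the "Expected: &{DONE} to match" line). A sketch of the container fields involved, again with the corev1 Go types; the image, path, and UID are illustrative, not the test's exact values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1000) // any non-root UID
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "busybox", // illustrative
		Command: []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		// Non-default path: on termination the kubelet reads this file and
		// surfaces it as the message in the terminated container state.
		TerminationMessagePath: "/dev/termination-custom-log",
		SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
	}
	fmt.Printf("%+v\n", c)
}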
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:50:53.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Aug 30 17:50:53.722: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Aug 30 17:50:53.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8860'
Aug 30 17:50:55.403: INFO: stderr: ""
Aug 30 17:50:55.403: INFO: stdout: "service/redis-slave created\n"
Aug 30 17:50:55.404: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Aug 30 17:50:55.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8860'
Aug 30 17:50:57.139: INFO: stderr: ""
Aug 30 17:50:57.140: INFO: stdout: "service/redis-master created\n"
Aug 30 17:50:57.141: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 30 17:50:57.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8860'
Aug 30 17:50:58.895: INFO: stderr: ""
Aug 30 17:50:58.895: INFO: stdout: "service/frontend created\n"
Aug 30 17:50:58.897: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Aug 30 17:50:58.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8860'
Aug 30 17:51:00.698: INFO: stderr: ""
Aug 30 17:51:00.698: INFO: stdout: "deployment.apps/frontend created\n"
Aug 30 17:51:00.699: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 30 17:51:00.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8860'
Aug 30 17:51:02.492: INFO: stderr: ""
Aug 30 17:51:02.492: INFO: stdout: "deployment.apps/redis-master created\n"
Aug 30 17:51:02.494: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Aug 30 17:51:02.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8860'
Aug 30 17:51:04.547: INFO: stderr: ""
Aug 30 17:51:04.548: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Aug 30 17:51:04.548: INFO: Waiting for all frontend pods to be Running.
Aug 30 17:51:09.600: INFO: Waiting for frontend to serve content.
Aug 30 17:51:10.712: INFO: Failed to get response from guestbook. err: , response: 
Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 Stack trace: #0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111) #1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4) #2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) #3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() #4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() #5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Stream in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
Aug 30 17:51:15.733: INFO: Trying to add a new entry to the guestbook.
Aug 30 17:51:15.751: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 30 17:51:15.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8860'
Aug 30 17:51:17.202: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 30 17:51:17.202: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 30 17:51:17.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8860'
Aug 30 17:51:18.543: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 30 17:51:18.543: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 30 17:51:18.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8860'
Aug 30 17:51:19.914: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 30 17:51:19.914: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 30 17:51:19.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8860'
Aug 30 17:51:21.239: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 30 17:51:21.239: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 30 17:51:21.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8860'
Aug 30 17:51:22.629: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 30 17:51:22.629: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 30 17:51:22.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8860'
Aug 30 17:51:24.128: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 30 17:51:24.128: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:51:24.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8860" for this suite.
Aug 30 17:52:04.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:52:04.736: INFO: namespace kubectl-8860 deletion completed in 40.208243531s

• [SLOW TEST:71.097 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:52:04.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-a65c7b64-8db6-457d-ab59-b58205886a9b
STEP: Creating a pod to test consume secrets
Aug 30 17:52:04.836: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bc1a3f23-d384-49d2-b312-881135841fca" in namespace "projected-1992" to be "success or failure"
Aug 30 17:52:04.851: INFO: Pod "pod-projected-secrets-bc1a3f23-d384-49d2-b312-881135841fca": Phase="Pending", Reason="", readiness=false. Elapsed: 14.707057ms
Aug 30 17:52:06.857: INFO: Pod "pod-projected-secrets-bc1a3f23-d384-49d2-b312-881135841fca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02147447s
Aug 30 17:52:08.863: INFO: Pod "pod-projected-secrets-bc1a3f23-d384-49d2-b312-881135841fca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027209635s
Aug 30 17:52:10.870: INFO: Pod "pod-projected-secrets-bc1a3f23-d384-49d2-b312-881135841fca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03375356s
STEP: Saw pod success
Aug 30 17:52:10.870: INFO: Pod "pod-projected-secrets-bc1a3f23-d384-49d2-b312-881135841fca" satisfied condition "success or failure"
Aug 30 17:52:10.875: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-bc1a3f23-d384-49d2-b312-881135841fca container projected-secret-volume-test: 
STEP: delete the pod
Aug 30 17:52:10.907: INFO: Waiting for pod pod-projected-secrets-bc1a3f23-d384-49d2-b312-881135841fca to disappear
Aug 30 17:52:10.953: INFO: Pod pod-projected-secrets-bc1a3f23-d384-49d2-b312-881135841fca no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:52:10.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1992" for this suite.
Aug 30 17:52:17.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:52:17.168: INFO: namespace projected-1992 deletion completed in 6.205247484s

• [SLOW TEST:12.431 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:52:17.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0830 17:52:28.604459       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 30 17:52:28.604: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 17:52:28.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3583" for this suite.
Aug 30 17:52:36.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 17:52:36.805: INFO: namespace gc-3583 deletion completed in 8.193052865s

• [SLOW TEST:19.637 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 17:52:36.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 30 17:52:36.967: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ba30558-67b9-496e-acc3-cfaa3322b3a1" in namespace "projected-4047" to be "success or failure"
Aug 30 17:52:37.038: INFO: Pod 
"downwardapi-volume-1ba30558-67b9-496e-acc3-cfaa3322b3a1": Phase="Pending", Reason="", readiness=false. Elapsed: 70.666271ms Aug 30 17:52:39.045: INFO: Pod "downwardapi-volume-1ba30558-67b9-496e-acc3-cfaa3322b3a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077783833s Aug 30 17:52:41.050: INFO: Pod "downwardapi-volume-1ba30558-67b9-496e-acc3-cfaa3322b3a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083328762s STEP: Saw pod success Aug 30 17:52:41.051: INFO: Pod "downwardapi-volume-1ba30558-67b9-496e-acc3-cfaa3322b3a1" satisfied condition "success or failure" Aug 30 17:52:41.055: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-1ba30558-67b9-496e-acc3-cfaa3322b3a1 container client-container: STEP: delete the pod Aug 30 17:52:41.209: INFO: Waiting for pod downwardapi-volume-1ba30558-67b9-496e-acc3-cfaa3322b3a1 to disappear Aug 30 17:52:41.215: INFO: Pod downwardapi-volume-1ba30558-67b9-496e-acc3-cfaa3322b3a1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 17:52:41.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4047" for this suite. Aug 30 17:52:47.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 17:52:47.399: INFO: namespace projected-4047 deletion completed in 6.176626007s • [SLOW TEST:10.593 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 17:52:47.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2273, will wait for the garbage collector to delete the pods Aug 30 17:52:53.550: INFO: Deleting Job.batch foo took: 9.51796ms Aug 30 17:52:53.851: INFO: Terminating Job.batch foo pods took: 300.835546ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 17:53:33.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2273" for 
this suite. Aug 30 17:53:39.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 17:53:39.838: INFO: namespace job-2273 deletion completed in 6.155348612s • [SLOW TEST:52.438 seconds] [sig-apps] Job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 17:53:39.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Aug 30 17:53:39.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1165 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Aug 30 17:53:48.370: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0830 17:53:48.209727 3541 log.go:172] (0x40001113f0) (0x400079a5a0) Create stream\nI0830 17:53:48.212678 3541 log.go:172] (0x40001113f0) (0x400079a5a0) Stream added, broadcasting: 1\nI0830 17:53:48.235866 3541 log.go:172] (0x40001113f0) Reply frame received for 1\nI0830 17:53:48.236533 3541 log.go:172] (0x40001113f0) (0x4000433860) Create stream\nI0830 17:53:48.236631 3541 log.go:172] (0x40001113f0) (0x4000433860) Stream added, broadcasting: 3\nI0830 17:53:48.238260 3541 log.go:172] (0x40001113f0) Reply frame received for 3\nI0830 17:53:48.238605 3541 log.go:172] (0x40001113f0) (0x400079a0a0) Create stream\nI0830 17:53:48.238688 3541 log.go:172] (0x40001113f0) (0x400079a0a0) Stream added, broadcasting: 5\nI0830 17:53:48.240225 3541 log.go:172] (0x40001113f0) Reply frame received for 5\nI0830 17:53:48.240581 3541 log.go:172] (0x40001113f0) (0x400037e000) Create stream\nI0830 17:53:48.240649 3541 log.go:172] (0x40001113f0) (0x400037e000) Stream added, broadcasting: 7\nI0830 17:53:48.241983 3541 log.go:172] (0x40001113f0) Reply frame received for 7\nI0830 17:53:48.245049 3541 log.go:172] (0x4000433860) (3) Writing data frame\nI0830 17:53:48.246391 3541 log.go:172] (0x4000433860) (3) Writing data frame\nI0830 17:53:48.247426 3541 log.go:172] (0x40001113f0) Data frame received for 5\nI0830 17:53:48.247635 3541 log.go:172] (0x400079a0a0) (5) Data frame handling\nI0830 17:53:48.247961 3541 log.go:172] (0x400079a0a0) (5) Data frame sent\nI0830 17:53:48.248343 3541 log.go:172] (0x40001113f0) Data frame received for 5\nI0830 17:53:48.248407 3541 log.go:172] (0x400079a0a0) (5) Data frame handling\nI0830 17:53:48.248479 3541 log.go:172] (0x400079a0a0) (5) Data frame sent\nI0830 17:53:48.295793 3541 log.go:172] (0x40001113f0) Data frame received for 7\nI0830 17:53:48.295981 3541 log.go:172] (0x400037e000) (7) Data frame handling\nI0830 17:53:48.296358 3541 log.go:172] (0x40001113f0) Data frame received for 5\nI0830 17:53:48.296597 3541 log.go:172] (0x400079a0a0) (5) Data frame handling\nI0830 17:53:48.296860 3541 log.go:172] (0x40001113f0) Data frame received for 1\nI0830 17:53:48.296984 3541 log.go:172] (0x400079a5a0) (1) Data frame handling\nI0830 17:53:48.297092 3541 log.go:172] (0x400079a5a0) (1) Data frame sent\nI0830 17:53:48.298536 3541 log.go:172] (0x40001113f0) (0x400079a5a0) Stream removed, broadcasting: 1\nI0830 17:53:48.298996 3541 log.go:172] (0x40001113f0) (0x4000433860) Stream removed, broadcasting: 3\nI0830 17:53:48.299971 3541 log.go:172] (0x40001113f0) (0x400079a5a0) Stream removed, broadcasting: 1\nI0830 17:53:48.300621 3541 log.go:172] (0x40001113f0) (0x4000433860) Stream removed, broadcasting: 3\nI0830 17:53:48.300714 3541 log.go:172] (0x40001113f0) (0x400079a0a0) Stream removed, broadcasting: 5\nI0830 17:53:48.301948 3541 log.go:172] (0x40001113f0) Go away received\nI0830 17:53:48.302395 3541 log.go:172] (0x40001113f0) (0x400037e000) Stream removed, broadcasting: 7\n" Aug 30 17:53:48.372: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 17:53:50.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1165" for this suite. 
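The deprecated --generator=job/v1 invocation above expands, roughly, to a batch/v1 Job whose single container reads attached stdin. The sketch below is a guess at that expansion under v1.15-era (pre-context) client-go signatures; the attach/stream plumbing that delivers "abcd1234" is omitted, and the StdinOnce choice is an assumption:

package main

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// --restart=OnFailure is what steers "kubectl run" to a Job.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:      "e2e-test-rm-busybox-job",
						Image:     "docker.io/library/busybox:1.29",
						Command:   []string{"sh", "-c", "cat && echo 'stdin closed'"},
						Stdin:     true, // --stdin
						StdinOnce: true, // assumption: close stdin after first attach
					}},
				},
			},
		},
	}
	if _, err := client.BatchV1().Jobs("kubectl-1165").Create(job); err != nil {
		panic(err)
	}
}
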
Aug 30 17:53:56.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 17:53:56.568: INFO: namespace kubectl-1165 deletion completed in 6.172476309s • [SLOW TEST:16.725 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 17:53:56.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-e084234d-2d5f-4ded-9b0f-2925812f329e STEP: Creating a pod to test consume configMaps Aug 30 17:53:56.700: INFO: Waiting up to 5m0s for pod "pod-configmaps-d9848129-d1d3-4f80-b801-0752f7ad2a39" in namespace "configmap-4011" to be "success or failure" Aug 30 17:53:56.710: INFO: Pod "pod-configmaps-d9848129-d1d3-4f80-b801-0752f7ad2a39": Phase="Pending", Reason="", readiness=false. Elapsed: 10.088883ms Aug 30 17:53:58.937: INFO: Pod "pod-configmaps-d9848129-d1d3-4f80-b801-0752f7ad2a39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237345006s Aug 30 17:54:00.945: INFO: Pod "pod-configmaps-d9848129-d1d3-4f80-b801-0752f7ad2a39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.245159188s STEP: Saw pod success Aug 30 17:54:00.945: INFO: Pod "pod-configmaps-d9848129-d1d3-4f80-b801-0752f7ad2a39" satisfied condition "success or failure" Aug 30 17:54:01.129: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-d9848129-d1d3-4f80-b801-0752f7ad2a39 container configmap-volume-test: STEP: delete the pod Aug 30 17:54:01.315: INFO: Waiting for pod pod-configmaps-d9848129-d1d3-4f80-b801-0752f7ad2a39 to disappear Aug 30 17:54:01.475: INFO: Pod pod-configmaps-d9848129-d1d3-4f80-b801-0752f7ad2a39 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 17:54:01.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4011" for this suite. 
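The "mappings and Item mode" variant above projects a single ConfigMap key to a chosen path with an explicit per-file mode, rather than dumping every key under its own name. A minimal sketch of that volume source; the ConfigMap name is taken from the log, but the key, path, and 0400 mode are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // per-item mode overrides the volume-wide defaultMode

	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test-volume-map-e084234d-2d5f-4ded-9b0f-2925812f329e",
				},
				// Items remaps individual keys; without it, every key in the
				// ConfigMap is projected into the volume under its own name.
				Items: []corev1.KeyToPath{{
					Key:  "data-1",         // illustrative key name
					Path: "path/to/data-2", // illustrative target path in the volume
					Mode: &mode,
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
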
Aug 30 17:54:07.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 17:54:07.653: INFO: namespace configmap-4011 deletion completed in 6.167554373s • [SLOW TEST:11.083 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 17:54:07.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Aug 30 17:54:07.716: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 30 17:54:07.736: INFO: Waiting for terminating namespaces to be deleted... 
Aug 30 17:54:07.743: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Aug 30 17:54:07.773: INFO: daemon-set-2gkvj from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded) Aug 30 17:54:07.773: INFO: Container app ready: true, restart count 0 Aug 30 17:54:07.773: INFO: daemon-set-6z8rp from daemonsets-4068 started at 2020-08-25 22:38:22 +0000 UTC (1 container statuses recorded) Aug 30 17:54:07.773: INFO: Container app ready: true, restart count 0 Aug 30 17:54:07.773: INFO: cassandra-76f5c4d86c-hd7ww from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (1 container statuses recorded) Aug 30 17:54:07.773: INFO: Container cassandra ready: true, restart count 0 Aug 30 17:54:07.773: INFO: homer-74dd4556d9-q6gxg from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (1 container statuses recorded) Aug 30 17:54:07.773: INFO: Container homer ready: true, restart count 0 Aug 30 17:54:07.773: INFO: sprout-686cc64cfb-6vw8x from ims-c5hpb started at 2020-08-30 10:12:39 +0000 UTC (2 container statuses recorded) Aug 30 17:54:07.773: INFO: Container sprout ready: true, restart count 0 Aug 30 17:54:07.773: INFO: Container tailer ready: true, restart count 0 Aug 30 17:54:07.773: INFO: homestead-prov-756c8bff5d-zvxsr from ims-c5hpb started at 2020-08-30 10:12:39 +0000 UTC (1 container statuses recorded) Aug 30 17:54:07.773: INFO: Container homestead-prov ready: true, restart count 0 Aug 30 17:54:07.773: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded) Aug 30 17:54:07.774: INFO: Container kube-proxy ready: true, restart count 0 Aug 30 17:54:07.774: INFO: daemon-set-qwbvn from daemonsets-4407 started at 2020-08-24 03:43:04 +0000 UTC (1 container statuses recorded) Aug 30 17:54:07.774: INFO: Container app ready: true, restart count 0 Aug 30 17:54:07.774: INFO: ellis-57b84b6dd7-rt8xk from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (1 container statuses recorded) Aug 30 17:54:07.774: INFO: Container ellis ready: true, restart count 0 Aug 30 17:54:07.774: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded) Aug 30 17:54:07.774: INFO: Container kindnet-cni ready: true, restart count 0 Aug 30 17:54:07.774: INFO: astaire-5ddcdd6b7f-9dgqk from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (2 container statuses recorded) Aug 30 17:54:07.774: INFO: Container astaire ready: true, restart count 0 Aug 30 17:54:07.774: INFO: Container tailer ready: true, restart count 0 Aug 30 17:54:07.774: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Aug 30 17:54:07.810: INFO: daemon-set-nk8hf from daemonsets-4407 started at 2020-08-24 03:43:05 +0000 UTC (1 container statuses recorded) Aug 30 17:54:07.810: INFO: Container app ready: true, restart count 0 Aug 30 17:54:07.810: INFO: homestead-57586d6cdc-zf5g4 from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (2 container statuses recorded) Aug 30 17:54:07.810: INFO: Container homestead ready: true, restart count 0 Aug 30 17:54:07.810: INFO: Container tailer ready: true, restart count 0 Aug 30 17:54:07.810: INFO: ralf-57c4654cb8-xhclj from ims-c5hpb started at 2020-08-30 10:12:39 +0000 UTC (2 container statuses recorded) Aug 30 17:54:07.810: INFO: Container ralf ready: true, restart count 0 Aug 30 17:54:07.810: INFO: Container tailer ready: true, restart count 0 Aug 30 17:54:07.810: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 
container statuses recorded) Aug 30 17:54:07.810: INFO: Container kindnet-cni ready: true, restart count 0 Aug 30 17:54:07.810: INFO: daemon-set-hlzh5 from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded) Aug 30 17:54:07.810: INFO: Container app ready: true, restart count 0 Aug 30 17:54:07.810: INFO: daemon-set-fzgmk from daemonsets-4068 started at 2020-08-25 22:38:22 +0000 UTC (1 container statuses recorded) Aug 30 17:54:07.810: INFO: Container app ready: true, restart count 0 Aug 30 17:54:07.810: INFO: bono-5cdb7bfcdd-8fpzx from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (2 container statuses recorded) Aug 30 17:54:07.810: INFO: Container bono ready: true, restart count 0 Aug 30 17:54:07.810: INFO: Container tailer ready: true, restart count 0 Aug 30 17:54:07.810: INFO: chronos-687b9884c5-m92fc from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (2 container statuses recorded) Aug 30 17:54:07.810: INFO: Container chronos ready: true, restart count 0 Aug 30 17:54:07.810: INFO: Container tailer ready: true, restart count 0 Aug 30 17:54:07.810: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded) Aug 30 17:54:07.810: INFO: Container kube-proxy ready: true, restart count 0 Aug 30 17:54:07.811: INFO: etcd-5cbf55c8c-bmvbb from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (1 container statuses recorded) Aug 30 17:54:07.811: INFO: Container etcd ready: true, restart count 0 Aug 30 17:54:07.811: INFO: live-test from ims-c5hpb started at 2020-08-30 10:18:13 +0000 UTC (1 container statuses recorded) Aug 30 17:54:07.811: INFO: Container live-test ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Aug 30 17:54:07.964: INFO: Pod daemon-set-2gkvj requesting resource cpu=0m on Node iruya-worker Aug 30 17:54:07.964: INFO: Pod daemon-set-hlzh5 requesting resource cpu=0m on Node iruya-worker2 Aug 30 17:54:07.964: INFO: Pod daemon-set-6z8rp requesting resource cpu=0m on Node iruya-worker Aug 30 17:54:07.964: INFO: Pod daemon-set-fzgmk requesting resource cpu=0m on Node iruya-worker2 Aug 30 17:54:07.964: INFO: Pod daemon-set-nk8hf requesting resource cpu=0m on Node iruya-worker2 Aug 30 17:54:07.965: INFO: Pod daemon-set-qwbvn requesting resource cpu=0m on Node iruya-worker Aug 30 17:54:07.965: INFO: Pod astaire-5ddcdd6b7f-9dgqk requesting resource cpu=0m on Node iruya-worker Aug 30 17:54:07.965: INFO: Pod bono-5cdb7bfcdd-8fpzx requesting resource cpu=0m on Node iruya-worker2 Aug 30 17:54:07.965: INFO: Pod cassandra-76f5c4d86c-hd7ww requesting resource cpu=0m on Node iruya-worker Aug 30 17:54:07.965: INFO: Pod chronos-687b9884c5-m92fc requesting resource cpu=0m on Node iruya-worker2 Aug 30 17:54:07.965: INFO: Pod ellis-57b84b6dd7-rt8xk requesting resource cpu=0m on Node iruya-worker Aug 30 17:54:07.965: INFO: Pod etcd-5cbf55c8c-bmvbb requesting resource cpu=0m on Node iruya-worker2 Aug 30 17:54:07.965: INFO: Pod homer-74dd4556d9-q6gxg requesting resource cpu=0m on Node iruya-worker Aug 30 17:54:07.965: INFO: Pod homestead-57586d6cdc-zf5g4 requesting resource cpu=0m on Node iruya-worker2 Aug 30 17:54:07.965: INFO: Pod homestead-prov-756c8bff5d-zvxsr requesting resource cpu=0m 
on Node iruya-worker Aug 30 17:54:07.965: INFO: Pod ralf-57c4654cb8-xhclj requesting resource cpu=0m on Node iruya-worker2 Aug 30 17:54:07.965: INFO: Pod sprout-686cc64cfb-6vw8x requesting resource cpu=0m on Node iruya-worker Aug 30 17:54:07.965: INFO: Pod kindnet-nkf5n requesting resource cpu=100m on Node iruya-worker Aug 30 17:54:07.965: INFO: Pod kindnet-xsdzz requesting resource cpu=100m on Node iruya-worker2 Aug 30 17:54:07.965: INFO: Pod kube-proxy-5zw8s requesting resource cpu=0m on Node iruya-worker Aug 30 17:54:07.965: INFO: Pod kube-proxy-b98qt requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-60c7ab54-266e-4b5c-8042-8d6cc9be54a0.16301d448d69132a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4268/filler-pod-60c7ab54-266e-4b5c-8042-8d6cc9be54a0 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-60c7ab54-266e-4b5c-8042-8d6cc9be54a0.16301d44dfcfea8f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-60c7ab54-266e-4b5c-8042-8d6cc9be54a0.16301d45522208f3], Reason = [Created], Message = [Created container filler-pod-60c7ab54-266e-4b5c-8042-8d6cc9be54a0] STEP: Considering event: Type = [Normal], Name = [filler-pod-60c7ab54-266e-4b5c-8042-8d6cc9be54a0.16301d4589d3e92b], Reason = [Started], Message = [Started container filler-pod-60c7ab54-266e-4b5c-8042-8d6cc9be54a0] STEP: Considering event: Type = [Normal], Name = [filler-pod-7a258b63-9423-43d6-a389-383df9e3feb8.16301d448f2aa97e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4268/filler-pod-7a258b63-9423-43d6-a389-383df9e3feb8 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-7a258b63-9423-43d6-a389-383df9e3feb8.16301d453a1c081f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-7a258b63-9423-43d6-a389-383df9e3feb8.16301d459eb21228], Reason = [Created], Message = [Created container filler-pod-7a258b63-9423-43d6-a389-383df9e3feb8] STEP: Considering event: Type = [Normal], Name = [filler-pod-7a258b63-9423-43d6-a389-383df9e3feb8.16301d45ae559277], Reason = [Started], Message = [Started container filler-pod-7a258b63-9423-43d6-a389-383df9e3feb8] STEP: Considering event: Type = [Warning], Name = [additional-pod.16301d45f857935e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 17:54:15.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4268" for this suite. 
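The "filler" pods above are sized from exactly this kind of accounting: the node's allocatable CPU minus the CPU requests already bound to it (note the kindnet pods at 100m each in the dump), leaving no room for the final "additional-pod" and producing the Insufficient cpu event. A sketch of that arithmetic with resource.Quantity; the function and its inputs are illustrative, not the framework's own helper:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// remainingCPU reports how much CPU a new pod could still request on node,
// given the pods already scheduled onto it.
func remainingCPU(node *corev1.Node, pods []corev1.Pod) resource.Quantity {
	remaining := node.Status.Allocatable[corev1.ResourceCPU].DeepCopy()
	for _, p := range pods {
		for _, c := range p.Spec.Containers {
			// Scheduling is driven by requests, not limits.
			if req, ok := c.Resources.Requests[corev1.ResourceCPU]; ok {
				remaining.Sub(req)
			}
		}
	}
	return remaining
}

func main() {
	node := &corev1.Node{}
	node.Status.Allocatable = corev1.ResourceList{
		corev1.ResourceCPU: resource.MustParse("4"), // illustrative node size
	}
	fmt.Println(remainingCPU(node, nil).String()) // "4" with no pods bound
}
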
Aug 30 17:54:21.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 17:54:21.312: INFO: namespace sched-pred-4268 deletion completed in 6.172873424s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:13.654 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 17:54:21.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-385c69c0-5a9f-4f69-9476-b4228d1c069b STEP: Creating a pod to test consume secrets Aug 30 17:54:21.861: INFO: Waiting up to 5m0s for pod "pod-secrets-e560fcf1-ab10-4507-92fd-3c9796799437" in namespace "secrets-1342" to be "success or failure" Aug 30 17:54:21.882: INFO: Pod "pod-secrets-e560fcf1-ab10-4507-92fd-3c9796799437": Phase="Pending", Reason="", readiness=false. Elapsed: 20.623603ms Aug 30 17:54:24.202: INFO: Pod "pod-secrets-e560fcf1-ab10-4507-92fd-3c9796799437": Phase="Pending", Reason="", readiness=false. Elapsed: 2.340579238s Aug 30 17:54:26.231: INFO: Pod "pod-secrets-e560fcf1-ab10-4507-92fd-3c9796799437": Phase="Running", Reason="", readiness=true. Elapsed: 4.369658695s Aug 30 17:54:28.239: INFO: Pod "pod-secrets-e560fcf1-ab10-4507-92fd-3c9796799437": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.377623864s STEP: Saw pod success Aug 30 17:54:28.239: INFO: Pod "pod-secrets-e560fcf1-ab10-4507-92fd-3c9796799437" satisfied condition "success or failure" Aug 30 17:54:28.243: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-e560fcf1-ab10-4507-92fd-3c9796799437 container secret-volume-test: STEP: delete the pod Aug 30 17:54:28.284: INFO: Waiting for pod pod-secrets-e560fcf1-ab10-4507-92fd-3c9796799437 to disappear Aug 30 17:54:28.297: INFO: Pod pod-secrets-e560fcf1-ab10-4507-92fd-3c9796799437 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 17:54:28.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1342" for this suite. Aug 30 17:54:34.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 17:54:34.504: INFO: namespace secrets-1342 deletion completed in 6.197296518s • [SLOW TEST:13.190 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 17:54:34.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 17:54:38.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4197" for this suite. 
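The hostAliases test above boils down to a single PodSpec field that the kubelet renders into the container's /etc/hosts. A minimal pod sketch; the IP and hostnames are illustrative, not read from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Each entry becomes an extra line in the container's /etc/hosts.
			HostAliases: []corev1.HostAlias{{
				IP:        "123.45.67.89",
				Hostnames: []string{"foo.local", "bar.local"},
			}},
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/etc/hosts"},
			}},
		},
	}
	fmt.Println(pod.Name)
}
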
Aug 30 17:55:28.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 17:55:28.829: INFO: namespace kubelet-test-4197 deletion completed in 50.161434051s • [SLOW TEST:54.324 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 17:55:28.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 30 17:55:39.050: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 30 17:55:39.057: INFO: Pod pod-with-poststart-http-hook still exists Aug 30 17:55:41.058: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 30 17:55:41.063: INFO: Pod pod-with-poststart-http-hook still exists Aug 30 17:55:43.058: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 30 17:55:43.066: INFO: Pod pod-with-poststart-http-hook still exists Aug 30 17:55:45.058: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 30 17:55:45.065: INFO: Pod pod-with-poststart-http-hook still exists Aug 30 17:55:47.058: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 30 17:55:47.065: INFO: Pod pod-with-poststart-http-hook still exists Aug 30 17:55:49.058: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 30 17:55:49.065: INFO: Pod pod-with-poststart-http-hook still exists Aug 30 17:55:51.058: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 30 17:55:51.064: INFO: Pod pod-with-poststart-http-hook still exists Aug 30 17:55:53.058: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 30 17:55:53.064: INFO: Pod pod-with-poststart-http-hook still exists Aug 30 17:55:55.058: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 30 17:55:55.064: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 17:55:55.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3921" for this suite. 
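The poststart sequence above pairs two pods: the handler pod created in BeforeEach, and the hook pod whose postStart HTTPGet must reach that handler before the container counts as started (a failed hook gets the container killed). A sketch of the hook side, assuming the v1.15 corev1.Handler type (renamed LifecycleHandler in later releases); the host, port, and path values are illustrative stand-ins for the handler pod's address:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1",
				Lifecycle: &corev1.Lifecycle{
					// The kubelet runs this GET right after the container
					// starts; if it fails, the container is killed and
					// restarted per its restart policy.
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: "10.244.1.10", // illustrative: the handler pod's IP
							Path: "/echo?msg=poststart",
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
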
Aug 30 17:56:17.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 17:56:17.297: INFO: namespace container-lifecycle-hook-3921 deletion completed in 22.224538104s • [SLOW TEST:48.467 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 17:56:17.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Aug 30 17:56:17.362: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 30 17:56:17.394: INFO: Waiting for terminating namespaces to be deleted... 
Aug 30 17:56:17.399: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Aug 30 17:56:17.430: INFO: homestead-prov-756c8bff5d-zvxsr from ims-c5hpb started at 2020-08-30 10:12:39 +0000 UTC (1 container statuses recorded) Aug 30 17:56:17.430: INFO: Container homestead-prov ready: true, restart count 0 Aug 30 17:56:17.430: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded) Aug 30 17:56:17.430: INFO: Container kube-proxy ready: true, restart count 0 Aug 30 17:56:17.430: INFO: daemon-set-qwbvn from daemonsets-4407 started at 2020-08-24 03:43:04 +0000 UTC (1 container statuses recorded) Aug 30 17:56:17.430: INFO: Container app ready: true, restart count 0 Aug 30 17:56:17.430: INFO: ellis-57b84b6dd7-rt8xk from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (1 container statuses recorded) Aug 30 17:56:17.430: INFO: Container ellis ready: true, restart count 0 Aug 30 17:56:17.430: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded) Aug 30 17:56:17.430: INFO: Container kindnet-cni ready: true, restart count 0 Aug 30 17:56:17.430: INFO: astaire-5ddcdd6b7f-9dgqk from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (2 container statuses recorded) Aug 30 17:56:17.430: INFO: Container astaire ready: true, restart count 0 Aug 30 17:56:17.430: INFO: Container tailer ready: true, restart count 0 Aug 30 17:56:17.430: INFO: daemon-set-2gkvj from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded) Aug 30 17:56:17.430: INFO: Container app ready: true, restart count 0 Aug 30 17:56:17.430: INFO: sprout-686cc64cfb-6vw8x from ims-c5hpb started at 2020-08-30 10:12:39 +0000 UTC (2 container statuses recorded) Aug 30 17:56:17.430: INFO: Container sprout ready: true, restart count 0 Aug 30 17:56:17.430: INFO: Container tailer ready: true, restart count 0 Aug 30 17:56:17.431: INFO: daemon-set-6z8rp from daemonsets-4068 started at 2020-08-25 22:38:22 +0000 UTC (1 container statuses recorded) Aug 30 17:56:17.431: INFO: Container app ready: true, restart count 0 Aug 30 17:56:17.431: INFO: cassandra-76f5c4d86c-hd7ww from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (1 container statuses recorded) Aug 30 17:56:17.431: INFO: Container cassandra ready: true, restart count 0 Aug 30 17:56:17.431: INFO: homer-74dd4556d9-q6gxg from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (1 container statuses recorded) Aug 30 17:56:17.431: INFO: Container homer ready: true, restart count 0 Aug 30 17:56:17.431: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Aug 30 17:56:17.462: INFO: etcd-5cbf55c8c-bmvbb from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (1 container statuses recorded) Aug 30 17:56:17.462: INFO: Container etcd ready: true, restart count 0 Aug 30 17:56:17.462: INFO: live-test from ims-c5hpb started at 2020-08-30 10:18:13 +0000 UTC (1 container statuses recorded) Aug 30 17:56:17.462: INFO: Container live-test ready: false, restart count 0 Aug 30 17:56:17.462: INFO: ralf-57c4654cb8-xhclj from ims-c5hpb started at 2020-08-30 10:12:39 +0000 UTC (2 container statuses recorded) Aug 30 17:56:17.462: INFO: Container ralf ready: true, restart count 0 Aug 30 17:56:17.462: INFO: Container tailer ready: true, restart count 0 Aug 30 17:56:17.462: INFO: daemon-set-nk8hf from daemonsets-4407 started at 2020-08-24 03:43:05 +0000 UTC (1 container statuses recorded) Aug 30 17:56:17.463: INFO: Container app ready: true, 
restart count 0 Aug 30 17:56:17.463: INFO: homestead-57586d6cdc-zf5g4 from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (2 container statuses recorded) Aug 30 17:56:17.463: INFO: Container homestead ready: true, restart count 0 Aug 30 17:56:17.463: INFO: Container tailer ready: true, restart count 0 Aug 30 17:56:17.463: INFO: daemon-set-fzgmk from daemonsets-4068 started at 2020-08-25 22:38:22 +0000 UTC (1 container statuses recorded) Aug 30 17:56:17.463: INFO: Container app ready: true, restart count 0 Aug 30 17:56:17.463: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded) Aug 30 17:56:17.463: INFO: Container kindnet-cni ready: true, restart count 0 Aug 30 17:56:17.463: INFO: daemon-set-hlzh5 from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded) Aug 30 17:56:17.463: INFO: Container app ready: true, restart count 0 Aug 30 17:56:17.463: INFO: bono-5cdb7bfcdd-8fpzx from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (2 container statuses recorded) Aug 30 17:56:17.463: INFO: Container bono ready: true, restart count 0 Aug 30 17:56:17.463: INFO: Container tailer ready: true, restart count 0 Aug 30 17:56:17.463: INFO: chronos-687b9884c5-m92fc from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (2 container statuses recorded) Aug 30 17:56:17.463: INFO: Container chronos ready: true, restart count 0 Aug 30 17:56:17.463: INFO: Container tailer ready: true, restart count 0 Aug 30 17:56:17.463: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded) Aug 30 17:56:17.463: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-ba644520-f8ef-4188-867a-7d7dd06fce0d 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-ba644520-f8ef-4188-867a-7d7dd06fce0d off the node iruya-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-ba644520-f8ef-4188-867a-7d7dd06fce0d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 17:56:27.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9740" for this suite. 
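The NodeSelector flow above is: launch an unlabeled probe pod to find a schedulable node, patch a unique label onto that node, then relaunch the pod with a matching nodeSelector. A sketch of the label-and-relaunch half, assuming v1.15-era (pre-context) client-go signatures; the label key below is a stand-in for the generated kubernetes.io/e2e-... key in the log, while the node name, namespace, and "42" value match it:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Apply the random test label to the node the probe pod landed on.
	patch := []byte(`{"metadata":{"labels":{"kubernetes.io/e2e-example":"42"}}}`)
	if _, err := client.CoreV1().Nodes().Patch(
		"iruya-worker2", types.StrategicMergePatchType, patch,
	); err != nil {
		panic(err)
	}

	// Relaunch the pod constrained to that label; the scheduler can now
	// only place it on iruya-worker2.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/e2e-example": "42"},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	if _, err := client.CoreV1().Pods("sched-pred-9740").Create(pod); err != nil {
		panic(err)
	}
}
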
Aug 30 17:56:45.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 17:56:45.833: INFO: namespace sched-pred-9740 deletion completed in 18.176500915s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:28.536 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 17:56:45.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Aug 30 17:56:53.454: INFO: 0 pods remaining Aug 30 17:56:53.454: INFO: 0 pods has nil DeletionTimestamp Aug 30 17:56:53.455: INFO: STEP: Gathering metrics W0830 17:56:54.941154 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Aug 30 17:56:54.941: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 17:56:54.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2791" for this suite. Aug 30 17:57:01.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 17:57:01.167: INFO: namespace gc-2791 deletion completed in 6.215781883s • [SLOW TEST:15.331 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 17:57:01.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-1df5983c-0e11-43db-b963-9e7297dfef97 STEP: Creating a pod to test consume configMaps Aug 30 17:57:01.339: INFO: Waiting up to 5m0s for pod "pod-configmaps-6082aa19-5a9e-402e-9bbf-a03cef2462ae" in namespace "configmap-9225" to be "success or failure" Aug 30 17:57:01.379: INFO: Pod "pod-configmaps-6082aa19-5a9e-402e-9bbf-a03cef2462ae": Phase="Pending", Reason="", readiness=false. Elapsed: 39.47075ms Aug 30 17:57:03.394: INFO: Pod "pod-configmaps-6082aa19-5a9e-402e-9bbf-a03cef2462ae": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.054493744s Aug 30 17:57:05.402: INFO: Pod "pod-configmaps-6082aa19-5a9e-402e-9bbf-a03cef2462ae": Phase="Running", Reason="", readiness=true. Elapsed: 4.062164083s Aug 30 17:57:07.409: INFO: Pod "pod-configmaps-6082aa19-5a9e-402e-9bbf-a03cef2462ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069245653s STEP: Saw pod success Aug 30 17:57:07.409: INFO: Pod "pod-configmaps-6082aa19-5a9e-402e-9bbf-a03cef2462ae" satisfied condition "success or failure" Aug 30 17:57:07.414: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-6082aa19-5a9e-402e-9bbf-a03cef2462ae container configmap-volume-test: STEP: delete the pod Aug 30 17:57:07.438: INFO: Waiting for pod pod-configmaps-6082aa19-5a9e-402e-9bbf-a03cef2462ae to disappear Aug 30 17:57:07.442: INFO: Pod pod-configmaps-6082aa19-5a9e-402e-9bbf-a03cef2462ae no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 17:57:07.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9225" for this suite. Aug 30 17:57:13.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 17:57:13.625: INFO: namespace configmap-9225 deletion completed in 6.174472222s • [SLOW TEST:12.458 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 17:57:13.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-426 I0830 17:57:13.740044 7 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-426, replica count: 1 I0830 17:57:14.791807 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0830 17:57:15.792510 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0830 17:57:16.793385 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0830 17:57:17.793964 7 runners.go:180] 
svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 30 17:57:17.967: INFO: Created: latency-svc-b76j9 Aug 30 17:57:18.023: INFO: Got endpoints: latency-svc-b76j9 [126.853082ms] Aug 30 17:57:18.065: INFO: Created: latency-svc-p94dh Aug 30 17:57:18.099: INFO: Got endpoints: latency-svc-p94dh [74.754918ms] Aug 30 17:57:18.178: INFO: Created: latency-svc-slgwf Aug 30 17:57:18.181: INFO: Got endpoints: latency-svc-slgwf [157.387879ms] Aug 30 17:57:18.219: INFO: Created: latency-svc-9sws9 Aug 30 17:57:18.230: INFO: Got endpoints: latency-svc-9sws9 [206.42959ms] Aug 30 17:57:18.257: INFO: Created: latency-svc-wxcls Aug 30 17:57:18.270: INFO: Got endpoints: latency-svc-wxcls [245.912373ms] Aug 30 17:57:18.315: INFO: Created: latency-svc-wk7tm Aug 30 17:57:18.324: INFO: Got endpoints: latency-svc-wk7tm [299.76242ms] Aug 30 17:57:18.359: INFO: Created: latency-svc-clwgk Aug 30 17:57:18.377: INFO: Got endpoints: latency-svc-clwgk [353.610804ms] Aug 30 17:57:18.399: INFO: Created: latency-svc-tl7f6 Aug 30 17:57:18.465: INFO: Got endpoints: latency-svc-tl7f6 [441.748522ms] Aug 30 17:57:18.467: INFO: Created: latency-svc-h6zvm Aug 30 17:57:18.477: INFO: Got endpoints: latency-svc-h6zvm [452.988852ms] Aug 30 17:57:18.502: INFO: Created: latency-svc-xhrjc Aug 30 17:57:18.517: INFO: Got endpoints: latency-svc-xhrjc [492.989265ms] Aug 30 17:57:18.536: INFO: Created: latency-svc-25wrq Aug 30 17:57:18.553: INFO: Got endpoints: latency-svc-25wrq [528.543294ms] Aug 30 17:57:18.615: INFO: Created: latency-svc-wpmtx Aug 30 17:57:18.658: INFO: Created: latency-svc-9t4vg Aug 30 17:57:18.658: INFO: Got endpoints: latency-svc-wpmtx [633.978776ms] Aug 30 17:57:18.682: INFO: Got endpoints: latency-svc-9t4vg [658.454991ms] Aug 30 17:57:18.779: INFO: Created: latency-svc-klk97 Aug 30 17:57:18.791: INFO: Got endpoints: latency-svc-klk97 [766.092275ms] Aug 30 17:57:18.845: INFO: Created: latency-svc-lgfnt Aug 30 17:57:18.857: INFO: Got endpoints: latency-svc-lgfnt [832.506873ms] Aug 30 17:57:18.944: INFO: Created: latency-svc-krj27 Aug 30 17:57:18.947: INFO: Got endpoints: latency-svc-krj27 [922.244646ms] Aug 30 17:57:18.986: INFO: Created: latency-svc-d8dz6 Aug 30 17:57:18.995: INFO: Got endpoints: latency-svc-d8dz6 [895.991047ms] Aug 30 17:57:19.025: INFO: Created: latency-svc-xc52l Aug 30 17:57:19.106: INFO: Got endpoints: latency-svc-xc52l [924.760386ms] Aug 30 17:57:19.108: INFO: Created: latency-svc-cq645 Aug 30 17:57:19.122: INFO: Got endpoints: latency-svc-cq645 [891.609845ms] Aug 30 17:57:19.142: INFO: Created: latency-svc-l88n8 Aug 30 17:57:19.158: INFO: Got endpoints: latency-svc-l88n8 [888.38471ms] Aug 30 17:57:19.196: INFO: Created: latency-svc-wsrsq Aug 30 17:57:19.280: INFO: Got endpoints: latency-svc-wsrsq [956.124273ms] Aug 30 17:57:19.285: INFO: Created: latency-svc-ndlbq Aug 30 17:57:19.296: INFO: Got endpoints: latency-svc-ndlbq [918.829817ms] Aug 30 17:57:19.347: INFO: Created: latency-svc-zkxkm Aug 30 17:57:19.357: INFO: Got endpoints: latency-svc-zkxkm [891.606051ms] Aug 30 17:57:19.380: INFO: Created: latency-svc-m7982 Aug 30 17:57:19.430: INFO: Got endpoints: latency-svc-m7982 [952.479215ms] Aug 30 17:57:19.439: INFO: Created: latency-svc-rq2ml Aug 30 17:57:19.454: INFO: Got endpoints: latency-svc-rq2ml [936.477188ms] Aug 30 17:57:19.473: INFO: Created: latency-svc-qfsm8 Aug 30 17:57:19.484: INFO: Got endpoints: latency-svc-qfsm8 [930.846662ms] Aug 30 17:57:19.502: INFO: Created: latency-svc-cbxcd Aug 30 
17:57:19.514: INFO: Got endpoints: latency-svc-cbxcd [855.327254ms] Aug 30 17:57:19.568: INFO: Created: latency-svc-f2cn6 Aug 30 17:57:19.570: INFO: Got endpoints: latency-svc-f2cn6 [888.013338ms] Aug 30 17:57:19.625: INFO: Created: latency-svc-q75wh Aug 30 17:57:19.641: INFO: Got endpoints: latency-svc-q75wh [849.871198ms] Aug 30 17:57:19.661: INFO: Created: latency-svc-gqgsq Aug 30 17:57:19.717: INFO: Got endpoints: latency-svc-gqgsq [859.892859ms] Aug 30 17:57:19.731: INFO: Created: latency-svc-gwwhz Aug 30 17:57:19.743: INFO: Got endpoints: latency-svc-gwwhz [796.090869ms] Aug 30 17:57:19.761: INFO: Created: latency-svc-kckww Aug 30 17:57:19.774: INFO: Got endpoints: latency-svc-kckww [779.054843ms] Aug 30 17:57:19.891: INFO: Created: latency-svc-xpv4p Aug 30 17:57:19.925: INFO: Created: latency-svc-phbxb Aug 30 17:57:19.925: INFO: Got endpoints: latency-svc-xpv4p [819.089361ms] Aug 30 17:57:19.943: INFO: Got endpoints: latency-svc-phbxb [820.319959ms] Aug 30 17:57:19.964: INFO: Created: latency-svc-jmmnm Aug 30 17:57:19.979: INFO: Got endpoints: latency-svc-jmmnm [820.064265ms] Aug 30 17:57:20.040: INFO: Created: latency-svc-dxh2j Aug 30 17:57:20.046: INFO: Got endpoints: latency-svc-dxh2j [766.114081ms] Aug 30 17:57:20.075: INFO: Created: latency-svc-k82fk Aug 30 17:57:20.105: INFO: Got endpoints: latency-svc-k82fk [807.95299ms] Aug 30 17:57:20.129: INFO: Created: latency-svc-4lnpj Aug 30 17:57:20.209: INFO: Got endpoints: latency-svc-4lnpj [851.144641ms] Aug 30 17:57:20.211: INFO: Created: latency-svc-4j7v6 Aug 30 17:57:20.219: INFO: Got endpoints: latency-svc-4j7v6 [788.986234ms] Aug 30 17:57:20.264: INFO: Created: latency-svc-p9jp4 Aug 30 17:57:20.274: INFO: Got endpoints: latency-svc-p9jp4 [819.97777ms] Aug 30 17:57:20.354: INFO: Created: latency-svc-frdl7 Aug 30 17:57:20.357: INFO: Got endpoints: latency-svc-frdl7 [872.067735ms] Aug 30 17:57:20.391: INFO: Created: latency-svc-78gkr Aug 30 17:57:20.406: INFO: Got endpoints: latency-svc-78gkr [892.286169ms] Aug 30 17:57:20.432: INFO: Created: latency-svc-24ttt Aug 30 17:57:20.442: INFO: Got endpoints: latency-svc-24ttt [871.596961ms] Aug 30 17:57:20.490: INFO: Created: latency-svc-5mcvg Aug 30 17:57:20.492: INFO: Got endpoints: latency-svc-5mcvg [851.45751ms] Aug 30 17:57:20.527: INFO: Created: latency-svc-47t8h Aug 30 17:57:20.539: INFO: Got endpoints: latency-svc-47t8h [821.873199ms] Aug 30 17:57:20.561: INFO: Created: latency-svc-fspt9 Aug 30 17:57:20.575: INFO: Got endpoints: latency-svc-fspt9 [831.591036ms] Aug 30 17:57:20.646: INFO: Created: latency-svc-295v9 Aug 30 17:57:20.649: INFO: Got endpoints: latency-svc-295v9 [874.899091ms] Aug 30 17:57:20.682: INFO: Created: latency-svc-tg7ln Aug 30 17:57:20.690: INFO: Got endpoints: latency-svc-tg7ln [763.805866ms] Aug 30 17:57:20.735: INFO: Created: latency-svc-fhcwx Aug 30 17:57:20.795: INFO: Got endpoints: latency-svc-fhcwx [852.36287ms] Aug 30 17:57:20.853: INFO: Created: latency-svc-589js Aug 30 17:57:20.934: INFO: Got endpoints: latency-svc-589js [954.968295ms] Aug 30 17:57:20.959: INFO: Created: latency-svc-5fm96 Aug 30 17:57:20.977: INFO: Got endpoints: latency-svc-5fm96 [930.225512ms] Aug 30 17:57:21.012: INFO: Created: latency-svc-9n4ld Aug 30 17:57:21.066: INFO: Got endpoints: latency-svc-9n4ld [961.148007ms] Aug 30 17:57:21.117: INFO: Created: latency-svc-4857l Aug 30 17:57:21.133: INFO: Got endpoints: latency-svc-4857l [924.479754ms] Aug 30 17:57:21.203: INFO: Created: latency-svc-qmzc7 Aug 30 17:57:21.228: INFO: Got endpoints: latency-svc-qmzc7 [1.008608252s] Aug 30 
17:57:21.228: INFO: Created: latency-svc-zrw4d Aug 30 17:57:21.241: INFO: Got endpoints: latency-svc-zrw4d [967.223956ms] Aug 30 17:57:21.260: INFO: Created: latency-svc-zwzn4 Aug 30 17:57:21.271: INFO: Got endpoints: latency-svc-zwzn4 [914.189031ms] Aug 30 17:57:21.291: INFO: Created: latency-svc-9pfwd Aug 30 17:57:21.351: INFO: Got endpoints: latency-svc-9pfwd [944.599567ms] Aug 30 17:57:21.356: INFO: Created: latency-svc-pj4md Aug 30 17:57:21.389: INFO: Got endpoints: latency-svc-pj4md [946.837838ms] Aug 30 17:57:21.515: INFO: Created: latency-svc-qsbgk Aug 30 17:57:21.519: INFO: Got endpoints: latency-svc-qsbgk [1.02626815s] Aug 30 17:57:21.578: INFO: Created: latency-svc-68fl6 Aug 30 17:57:21.651: INFO: Got endpoints: latency-svc-68fl6 [1.111761815s] Aug 30 17:57:21.677: INFO: Created: latency-svc-4bcj4 Aug 30 17:57:21.693: INFO: Got endpoints: latency-svc-4bcj4 [1.118365497s] Aug 30 17:57:21.789: INFO: Created: latency-svc-dq4xq Aug 30 17:57:21.832: INFO: Created: latency-svc-9xkk2 Aug 30 17:57:21.833: INFO: Got endpoints: latency-svc-dq4xq [1.183178929s] Aug 30 17:57:21.849: INFO: Got endpoints: latency-svc-9xkk2 [1.15930814s] Aug 30 17:57:21.869: INFO: Created: latency-svc-sp4zv Aug 30 17:57:21.944: INFO: Got endpoints: latency-svc-sp4zv [1.149188603s] Aug 30 17:57:21.946: INFO: Created: latency-svc-fg787 Aug 30 17:57:21.973: INFO: Got endpoints: latency-svc-fg787 [1.038447669s] Aug 30 17:57:21.992: INFO: Created: latency-svc-twvd5 Aug 30 17:57:22.024: INFO: Got endpoints: latency-svc-twvd5 [1.047369634s] Aug 30 17:57:22.089: INFO: Created: latency-svc-k58jr Aug 30 17:57:22.121: INFO: Got endpoints: latency-svc-k58jr [1.054718524s] Aug 30 17:57:22.121: INFO: Created: latency-svc-gd8pj Aug 30 17:57:22.138: INFO: Got endpoints: latency-svc-gd8pj [1.004153561s] Aug 30 17:57:22.171: INFO: Created: latency-svc-bg4pq Aug 30 17:57:22.250: INFO: Got endpoints: latency-svc-bg4pq [1.021757096s] Aug 30 17:57:22.277: INFO: Created: latency-svc-6sx8c Aug 30 17:57:22.307: INFO: Got endpoints: latency-svc-6sx8c [1.065090027s] Aug 30 17:57:22.337: INFO: Created: latency-svc-nw5r8 Aug 30 17:57:22.406: INFO: Got endpoints: latency-svc-nw5r8 [1.134514425s] Aug 30 17:57:22.421: INFO: Created: latency-svc-qgt6g Aug 30 17:57:22.463: INFO: Got endpoints: latency-svc-qgt6g [1.11124582s] Aug 30 17:57:22.493: INFO: Created: latency-svc-rhllw Aug 30 17:57:22.555: INFO: Got endpoints: latency-svc-rhllw [1.165720811s] Aug 30 17:57:22.567: INFO: Created: latency-svc-vnsfs Aug 30 17:57:22.583: INFO: Got endpoints: latency-svc-vnsfs [1.064211728s] Aug 30 17:57:22.604: INFO: Created: latency-svc-hbbvv Aug 30 17:57:22.620: INFO: Got endpoints: latency-svc-hbbvv [968.173517ms] Aug 30 17:57:22.640: INFO: Created: latency-svc-tlxqs Aug 30 17:57:22.711: INFO: Got endpoints: latency-svc-tlxqs [1.017747773s] Aug 30 17:57:22.714: INFO: Created: latency-svc-9hl9q Aug 30 17:57:22.736: INFO: Got endpoints: latency-svc-9hl9q [903.224219ms] Aug 30 17:57:22.757: INFO: Created: latency-svc-gkn4b Aug 30 17:57:22.770: INFO: Got endpoints: latency-svc-gkn4b [920.986021ms] Aug 30 17:57:22.809: INFO: Created: latency-svc-4bp4c Aug 30 17:57:22.891: INFO: Got endpoints: latency-svc-4bp4c [946.804756ms] Aug 30 17:57:22.895: INFO: Created: latency-svc-k8mlp Aug 30 17:57:22.904: INFO: Got endpoints: latency-svc-k8mlp [930.84994ms] Aug 30 17:57:22.955: INFO: Created: latency-svc-8vgtf Aug 30 17:57:22.969: INFO: Got endpoints: latency-svc-8vgtf [944.758031ms] Aug 30 17:57:23.104: INFO: Created: latency-svc-wnsll Aug 30 17:57:23.104: INFO: 
Got endpoints: latency-svc-wnsll [983.218801ms] Aug 30 17:57:23.164: INFO: Created: latency-svc-zncl9 Aug 30 17:57:23.180: INFO: Got endpoints: latency-svc-zncl9 [1.041729504s] Aug 30 17:57:23.201: INFO: Created: latency-svc-fnc57 Aug 30 17:57:23.268: INFO: Got endpoints: latency-svc-fnc57 [1.018682533s] Aug 30 17:57:23.279: INFO: Created: latency-svc-l28pv Aug 30 17:57:23.294: INFO: Got endpoints: latency-svc-l28pv [986.688538ms] Aug 30 17:57:23.294: INFO: Created: latency-svc-cfntb Aug 30 17:57:23.306: INFO: Got endpoints: latency-svc-cfntb [900.048418ms] Aug 30 17:57:23.329: INFO: Created: latency-svc-vskct Aug 30 17:57:23.342: INFO: Got endpoints: latency-svc-vskct [879.585445ms] Aug 30 17:57:23.365: INFO: Created: latency-svc-f8k9l Aug 30 17:57:23.423: INFO: Got endpoints: latency-svc-f8k9l [867.524247ms] Aug 30 17:57:23.427: INFO: Created: latency-svc-fsgf9 Aug 30 17:57:23.434: INFO: Got endpoints: latency-svc-fsgf9 [850.192005ms] Aug 30 17:57:23.457: INFO: Created: latency-svc-gqf98 Aug 30 17:57:23.469: INFO: Got endpoints: latency-svc-gqf98 [849.417896ms] Aug 30 17:57:23.491: INFO: Created: latency-svc-6lc4r Aug 30 17:57:23.506: INFO: Got endpoints: latency-svc-6lc4r [794.358533ms] Aug 30 17:57:23.568: INFO: Created: latency-svc-zl2qf Aug 30 17:57:23.570: INFO: Got endpoints: latency-svc-zl2qf [833.750229ms] Aug 30 17:57:23.601: INFO: Created: latency-svc-k9nxj Aug 30 17:57:23.609: INFO: Got endpoints: latency-svc-k9nxj [838.474905ms] Aug 30 17:57:23.633: INFO: Created: latency-svc-ngmqz Aug 30 17:57:23.645: INFO: Got endpoints: latency-svc-ngmqz [752.985247ms] Aug 30 17:57:23.666: INFO: Created: latency-svc-68m7h Aug 30 17:57:23.729: INFO: Got endpoints: latency-svc-68m7h [825.40393ms] Aug 30 17:57:23.732: INFO: Created: latency-svc-sxhfc Aug 30 17:57:23.764: INFO: Got endpoints: latency-svc-sxhfc [794.627249ms] Aug 30 17:57:23.806: INFO: Created: latency-svc-9bnpd Aug 30 17:57:23.826: INFO: Got endpoints: latency-svc-9bnpd [721.078238ms] Aug 30 17:57:23.874: INFO: Created: latency-svc-mxmt4 Aug 30 17:57:23.875: INFO: Got endpoints: latency-svc-mxmt4 [694.854213ms] Aug 30 17:57:23.912: INFO: Created: latency-svc-glc2f Aug 30 17:57:23.928: INFO: Got endpoints: latency-svc-glc2f [658.917899ms] Aug 30 17:57:24.013: INFO: Created: latency-svc-jwrnr Aug 30 17:57:24.014: INFO: Got endpoints: latency-svc-jwrnr [720.140858ms] Aug 30 17:57:24.046: INFO: Created: latency-svc-fj8wr Aug 30 17:57:24.081: INFO: Got endpoints: latency-svc-fj8wr [775.205623ms] Aug 30 17:57:24.148: INFO: Created: latency-svc-98rvc Aug 30 17:57:24.162: INFO: Got endpoints: latency-svc-98rvc [819.426727ms] Aug 30 17:57:24.191: INFO: Created: latency-svc-6n876 Aug 30 17:57:24.205: INFO: Got endpoints: latency-svc-6n876 [781.073431ms] Aug 30 17:57:24.235: INFO: Created: latency-svc-kpjwt Aug 30 17:57:24.310: INFO: Got endpoints: latency-svc-kpjwt [876.00442ms] Aug 30 17:57:24.313: INFO: Created: latency-svc-w75cw Aug 30 17:57:24.319: INFO: Got endpoints: latency-svc-w75cw [849.62598ms] Aug 30 17:57:24.347: INFO: Created: latency-svc-4jf7r Aug 30 17:57:24.367: INFO: Got endpoints: latency-svc-4jf7r [861.155956ms] Aug 30 17:57:24.395: INFO: Created: latency-svc-gplq2 Aug 30 17:57:24.457: INFO: Got endpoints: latency-svc-gplq2 [886.706237ms] Aug 30 17:57:24.475: INFO: Created: latency-svc-blhlj Aug 30 17:57:24.494: INFO: Got endpoints: latency-svc-blhlj [885.250133ms] Aug 30 17:57:24.550: INFO: Created: latency-svc-wr425 Aug 30 17:57:24.597: INFO: Got endpoints: latency-svc-wr425 [952.197001ms] Aug 30 17:57:24.623: INFO: 
Created: latency-svc-gbdds Aug 30 17:57:24.639: INFO: Got endpoints: latency-svc-gbdds [908.951148ms] Aug 30 17:57:24.662: INFO: Created: latency-svc-bgfhh Aug 30 17:57:24.675: INFO: Got endpoints: latency-svc-bgfhh [910.065027ms] Aug 30 17:57:24.754: INFO: Created: latency-svc-rjkkj Aug 30 17:57:24.755: INFO: Got endpoints: latency-svc-rjkkj [929.249218ms] Aug 30 17:57:24.802: INFO: Created: latency-svc-mxdfk Aug 30 17:57:24.819: INFO: Got endpoints: latency-svc-mxdfk [944.224515ms] Aug 30 17:57:24.838: INFO: Created: latency-svc-szvpg Aug 30 17:57:24.849: INFO: Got endpoints: latency-svc-szvpg [921.756466ms] Aug 30 17:57:24.902: INFO: Created: latency-svc-9wf9p Aug 30 17:57:24.928: INFO: Got endpoints: latency-svc-9wf9p [913.667213ms] Aug 30 17:57:24.965: INFO: Created: latency-svc-52bcm Aug 30 17:57:24.976: INFO: Got endpoints: latency-svc-52bcm [894.302536ms] Aug 30 17:57:25.065: INFO: Created: latency-svc-lhx5q Aug 30 17:57:25.067: INFO: Got endpoints: latency-svc-lhx5q [905.066307ms] Aug 30 17:57:25.141: INFO: Created: latency-svc-snvsd Aug 30 17:57:25.192: INFO: Got endpoints: latency-svc-snvsd [987.437322ms] Aug 30 17:57:25.234: INFO: Created: latency-svc-ttcsj Aug 30 17:57:25.270: INFO: Got endpoints: latency-svc-ttcsj [960.031454ms] Aug 30 17:57:25.328: INFO: Created: latency-svc-vz978 Aug 30 17:57:25.343: INFO: Got endpoints: latency-svc-vz978 [1.023344129s] Aug 30 17:57:25.366: INFO: Created: latency-svc-5wk2p Aug 30 17:57:25.378: INFO: Got endpoints: latency-svc-5wk2p [1.010915611s] Aug 30 17:57:25.402: INFO: Created: latency-svc-jb5px Aug 30 17:57:25.415: INFO: Got endpoints: latency-svc-jb5px [957.476616ms] Aug 30 17:57:25.466: INFO: Created: latency-svc-m2l5v Aug 30 17:57:25.468: INFO: Got endpoints: latency-svc-m2l5v [973.085941ms] Aug 30 17:57:25.490: INFO: Created: latency-svc-gw8wp Aug 30 17:57:25.506: INFO: Got endpoints: latency-svc-gw8wp [908.599915ms] Aug 30 17:57:25.525: INFO: Created: latency-svc-ppkxc Aug 30 17:57:25.561: INFO: Got endpoints: latency-svc-ppkxc [921.675158ms] Aug 30 17:57:25.615: INFO: Created: latency-svc-jkxsk Aug 30 17:57:25.643: INFO: Created: latency-svc-r9r7v Aug 30 17:57:25.644: INFO: Got endpoints: latency-svc-jkxsk [968.632506ms] Aug 30 17:57:25.657: INFO: Got endpoints: latency-svc-r9r7v [901.487735ms] Aug 30 17:57:25.680: INFO: Created: latency-svc-fcmcd Aug 30 17:57:25.693: INFO: Got endpoints: latency-svc-fcmcd [873.932113ms] Aug 30 17:57:25.766: INFO: Created: latency-svc-bslm5 Aug 30 17:57:25.767: INFO: Got endpoints: latency-svc-bslm5 [916.993989ms] Aug 30 17:57:25.835: INFO: Created: latency-svc-5c9wb Aug 30 17:57:25.856: INFO: Got endpoints: latency-svc-5c9wb [928.035704ms] Aug 30 17:57:25.970: INFO: Created: latency-svc-hprjc Aug 30 17:57:25.971: INFO: Got endpoints: latency-svc-hprjc [995.310484ms] Aug 30 17:57:26.025: INFO: Created: latency-svc-sbmwv Aug 30 17:57:26.035: INFO: Got endpoints: latency-svc-sbmwv [967.944299ms] Aug 30 17:57:26.057: INFO: Created: latency-svc-qk7xk Aug 30 17:57:26.112: INFO: Got endpoints: latency-svc-qk7xk [919.531762ms] Aug 30 17:57:26.128: INFO: Created: latency-svc-cp4zb Aug 30 17:57:26.141: INFO: Got endpoints: latency-svc-cp4zb [870.653407ms] Aug 30 17:57:26.165: INFO: Created: latency-svc-nk6tq Aug 30 17:57:26.184: INFO: Got endpoints: latency-svc-nk6tq [840.903332ms] Aug 30 17:57:26.275: INFO: Created: latency-svc-ltt7w Aug 30 17:57:26.281: INFO: Got endpoints: latency-svc-ltt7w [902.103485ms] Aug 30 17:57:26.344: INFO: Created: latency-svc-2zgjq Aug 30 17:57:26.424: INFO: Got endpoints: 
latency-svc-2zgjq [1.008566464s] Aug 30 17:57:26.461: INFO: Created: latency-svc-27nbc Aug 30 17:57:26.490: INFO: Got endpoints: latency-svc-27nbc [1.022554613s] Aug 30 17:57:26.512: INFO: Created: latency-svc-w2m5b Aug 30 17:57:26.585: INFO: Got endpoints: latency-svc-w2m5b [1.079041433s] Aug 30 17:57:26.588: INFO: Created: latency-svc-snj2s Aug 30 17:57:26.592: INFO: Got endpoints: latency-svc-snj2s [1.03121269s] Aug 30 17:57:26.647: INFO: Created: latency-svc-w9gtf Aug 30 17:57:26.659: INFO: Got endpoints: latency-svc-w9gtf [1.014783645s] Aug 30 17:57:26.683: INFO: Created: latency-svc-npj5g Aug 30 17:57:26.771: INFO: Got endpoints: latency-svc-npj5g [1.114112523s] Aug 30 17:57:26.774: INFO: Created: latency-svc-2wtjh Aug 30 17:57:26.778: INFO: Got endpoints: latency-svc-2wtjh [1.084780583s] Aug 30 17:57:26.851: INFO: Created: latency-svc-ksg6d Aug 30 17:57:26.915: INFO: Got endpoints: latency-svc-ksg6d [1.148001722s] Aug 30 17:57:26.983: INFO: Created: latency-svc-t6wrz Aug 30 17:57:27.013: INFO: Got endpoints: latency-svc-t6wrz [1.157446217s] Aug 30 17:57:27.101: INFO: Created: latency-svc-wncrj Aug 30 17:57:27.128: INFO: Got endpoints: latency-svc-wncrj [1.156298848s] Aug 30 17:57:27.163: INFO: Created: latency-svc-cv8lh Aug 30 17:57:27.250: INFO: Got endpoints: latency-svc-cv8lh [1.214367303s] Aug 30 17:57:27.343: INFO: Created: latency-svc-7lwc9 Aug 30 17:57:27.394: INFO: Got endpoints: latency-svc-7lwc9 [1.281639763s] Aug 30 17:57:27.415: INFO: Created: latency-svc-r2j9s Aug 30 17:57:27.447: INFO: Got endpoints: latency-svc-r2j9s [1.305326179s] Aug 30 17:57:27.472: INFO: Created: latency-svc-pmjf2 Aug 30 17:57:27.489: INFO: Got endpoints: latency-svc-pmjf2 [1.304800817s] Aug 30 17:57:27.538: INFO: Created: latency-svc-6wmjq Aug 30 17:57:27.542: INFO: Got endpoints: latency-svc-6wmjq [1.261421193s] Aug 30 17:57:27.566: INFO: Created: latency-svc-h5x99 Aug 30 17:57:27.579: INFO: Got endpoints: latency-svc-h5x99 [1.155221728s] Aug 30 17:57:27.602: INFO: Created: latency-svc-pqrwk Aug 30 17:57:27.609: INFO: Got endpoints: latency-svc-pqrwk [1.118452426s] Aug 30 17:57:27.632: INFO: Created: latency-svc-s7jvj Aug 30 17:57:27.681: INFO: Got endpoints: latency-svc-s7jvj [1.095493721s] Aug 30 17:57:27.701: INFO: Created: latency-svc-7ds7s Aug 30 17:57:27.712: INFO: Got endpoints: latency-svc-7ds7s [1.119885909s] Aug 30 17:57:27.737: INFO: Created: latency-svc-6k296 Aug 30 17:57:27.748: INFO: Got endpoints: latency-svc-6k296 [1.088831208s] Aug 30 17:57:27.779: INFO: Created: latency-svc-6x5pn Aug 30 17:57:27.818: INFO: Got endpoints: latency-svc-6x5pn [1.046987669s] Aug 30 17:57:27.832: INFO: Created: latency-svc-gwfgh Aug 30 17:57:27.868: INFO: Got endpoints: latency-svc-gwfgh [1.089305715s] Aug 30 17:57:27.916: INFO: Created: latency-svc-kjfwx Aug 30 17:57:27.977: INFO: Got endpoints: latency-svc-kjfwx [1.062445116s] Aug 30 17:57:27.978: INFO: Created: latency-svc-7mpv7 Aug 30 17:57:27.978: INFO: Got endpoints: latency-svc-7mpv7 [964.799637ms] Aug 30 17:57:28.018: INFO: Created: latency-svc-fmlb4 Aug 30 17:57:28.031: INFO: Got endpoints: latency-svc-fmlb4 [902.725043ms] Aug 30 17:57:28.054: INFO: Created: latency-svc-njsg2 Aug 30 17:57:28.068: INFO: Got endpoints: latency-svc-njsg2 [817.590627ms] Aug 30 17:57:28.161: INFO: Created: latency-svc-r6rxs Aug 30 17:57:28.170: INFO: Got endpoints: latency-svc-r6rxs [776.238574ms] Aug 30 17:57:28.190: INFO: Created: latency-svc-9d5cd Aug 30 17:57:28.219: INFO: Got endpoints: latency-svc-9d5cd [772.5322ms] Aug 30 17:57:28.246: INFO: Created: 
latency-svc-td69h Aug 30 17:57:28.304: INFO: Got endpoints: latency-svc-td69h [815.00594ms] Aug 30 17:57:28.325: INFO: Created: latency-svc-qr2s4 Aug 30 17:57:28.339: INFO: Got endpoints: latency-svc-qr2s4 [796.242026ms] Aug 30 17:57:28.357: INFO: Created: latency-svc-5nd52 Aug 30 17:57:28.369: INFO: Got endpoints: latency-svc-5nd52 [789.636574ms] Aug 30 17:57:28.393: INFO: Created: latency-svc-jnttj Aug 30 17:57:28.489: INFO: Got endpoints: latency-svc-jnttj [880.177819ms] Aug 30 17:57:28.490: INFO: Created: latency-svc-vwr2w Aug 30 17:57:28.510: INFO: Got endpoints: latency-svc-vwr2w [828.651537ms] Aug 30 17:57:28.540: INFO: Created: latency-svc-tb4tz Aug 30 17:57:28.556: INFO: Got endpoints: latency-svc-tb4tz [844.242783ms] Aug 30 17:57:28.574: INFO: Created: latency-svc-4kkrb Aug 30 17:57:28.621: INFO: Got endpoints: latency-svc-4kkrb [873.042744ms] Aug 30 17:57:28.623: INFO: Created: latency-svc-6lskj Aug 30 17:57:28.634: INFO: Got endpoints: latency-svc-6lskj [815.920616ms] Aug 30 17:57:28.657: INFO: Created: latency-svc-7qkdn Aug 30 17:57:28.671: INFO: Got endpoints: latency-svc-7qkdn [802.922715ms] Aug 30 17:57:28.691: INFO: Created: latency-svc-qfjbm Aug 30 17:57:28.702: INFO: Got endpoints: latency-svc-qfjbm [724.107907ms] Aug 30 17:57:28.720: INFO: Created: latency-svc-r84r7 Aug 30 17:57:28.777: INFO: Got endpoints: latency-svc-r84r7 [798.595517ms] Aug 30 17:57:28.779: INFO: Created: latency-svc-w4kbl Aug 30 17:57:28.807: INFO: Got endpoints: latency-svc-w4kbl [775.281463ms] Aug 30 17:57:28.825: INFO: Created: latency-svc-wzhnk Aug 30 17:57:28.834: INFO: Got endpoints: latency-svc-wzhnk [765.475445ms] Aug 30 17:57:28.852: INFO: Created: latency-svc-69vgw Aug 30 17:57:28.864: INFO: Got endpoints: latency-svc-69vgw [693.481186ms] Aug 30 17:57:28.921: INFO: Created: latency-svc-fhfvf Aug 30 17:57:28.936: INFO: Got endpoints: latency-svc-fhfvf [716.608773ms] Aug 30 17:57:28.975: INFO: Created: latency-svc-pqpmb Aug 30 17:57:28.993: INFO: Got endpoints: latency-svc-pqpmb [688.695534ms] Aug 30 17:57:29.059: INFO: Created: latency-svc-8fbl6 Aug 30 17:57:29.063: INFO: Got endpoints: latency-svc-8fbl6 [723.823808ms] Aug 30 17:57:29.098: INFO: Created: latency-svc-ph62v Aug 30 17:57:29.111: INFO: Got endpoints: latency-svc-ph62v [742.127919ms] Aug 30 17:57:29.158: INFO: Created: latency-svc-2mmw8 Aug 30 17:57:29.262: INFO: Got endpoints: latency-svc-2mmw8 [772.333634ms] Aug 30 17:57:29.281: INFO: Created: latency-svc-kpqz4 Aug 30 17:57:29.298: INFO: Got endpoints: latency-svc-kpqz4 [787.987156ms] Aug 30 17:57:29.436: INFO: Created: latency-svc-2qdzx Aug 30 17:57:29.439: INFO: Got endpoints: latency-svc-2qdzx [882.286859ms] Aug 30 17:57:29.494: INFO: Created: latency-svc-jcwrv Aug 30 17:57:29.508: INFO: Got endpoints: latency-svc-jcwrv [887.015449ms] Aug 30 17:57:29.531: INFO: Created: latency-svc-4hml2 Aug 30 17:57:29.568: INFO: Got endpoints: latency-svc-4hml2 [933.866222ms] Aug 30 17:57:29.581: INFO: Created: latency-svc-vvzls Aug 30 17:57:29.605: INFO: Got endpoints: latency-svc-vvzls [933.832288ms] Aug 30 17:57:29.635: INFO: Created: latency-svc-k27rq Aug 30 17:57:29.649: INFO: Got endpoints: latency-svc-k27rq [947.200287ms] Aug 30 17:57:29.713: INFO: Created: latency-svc-5x74h Aug 30 17:57:29.746: INFO: Created: latency-svc-x5fbg Aug 30 17:57:29.746: INFO: Got endpoints: latency-svc-5x74h [968.789985ms] Aug 30 17:57:29.758: INFO: Got endpoints: latency-svc-x5fbg [951.131886ms] Aug 30 17:57:29.782: INFO: Created: latency-svc-wndfn Aug 30 17:57:29.878: INFO: Got endpoints: 
latency-svc-wndfn [1.044789337s] Aug 30 17:57:29.881: INFO: Created: latency-svc-prwcn Aug 30 17:57:29.891: INFO: Got endpoints: latency-svc-prwcn [1.026910481s] Aug 30 17:57:29.969: INFO: Created: latency-svc-p2fwr Aug 30 17:57:30.023: INFO: Got endpoints: latency-svc-p2fwr [1.086696624s] Aug 30 17:57:30.037: INFO: Created: latency-svc-fkgvj Aug 30 17:57:30.053: INFO: Got endpoints: latency-svc-fkgvj [1.06042263s] Aug 30 17:57:30.091: INFO: Created: latency-svc-ptx9h Aug 30 17:57:30.172: INFO: Got endpoints: latency-svc-ptx9h [1.109142806s] Aug 30 17:57:30.196: INFO: Created: latency-svc-p2mtp Aug 30 17:57:30.209: INFO: Got endpoints: latency-svc-p2mtp [1.097611805s] Aug 30 17:57:30.227: INFO: Created: latency-svc-brmw5 Aug 30 17:57:30.240: INFO: Got endpoints: latency-svc-brmw5 [977.811838ms] Aug 30 17:57:30.272: INFO: Created: latency-svc-hjqqb Aug 30 17:57:30.316: INFO: Got endpoints: latency-svc-hjqqb [1.017470091s] Aug 30 17:57:30.317: INFO: Created: latency-svc-dpd2f Aug 30 17:57:30.346: INFO: Got endpoints: latency-svc-dpd2f [906.830092ms] Aug 30 17:57:30.376: INFO: Created: latency-svc-q2lq8 Aug 30 17:57:30.384: INFO: Got endpoints: latency-svc-q2lq8 [875.118153ms] Aug 30 17:57:30.385: INFO: Latencies: [74.754918ms 157.387879ms 206.42959ms 245.912373ms 299.76242ms 353.610804ms 441.748522ms 452.988852ms 492.989265ms 528.543294ms 633.978776ms 658.454991ms 658.917899ms 688.695534ms 693.481186ms 694.854213ms 716.608773ms 720.140858ms 721.078238ms 723.823808ms 724.107907ms 742.127919ms 752.985247ms 763.805866ms 765.475445ms 766.092275ms 766.114081ms 772.333634ms 772.5322ms 775.205623ms 775.281463ms 776.238574ms 779.054843ms 781.073431ms 787.987156ms 788.986234ms 789.636574ms 794.358533ms 794.627249ms 796.090869ms 796.242026ms 798.595517ms 802.922715ms 807.95299ms 815.00594ms 815.920616ms 817.590627ms 819.089361ms 819.426727ms 819.97777ms 820.064265ms 820.319959ms 821.873199ms 825.40393ms 828.651537ms 831.591036ms 832.506873ms 833.750229ms 838.474905ms 840.903332ms 844.242783ms 849.417896ms 849.62598ms 849.871198ms 850.192005ms 851.144641ms 851.45751ms 852.36287ms 855.327254ms 859.892859ms 861.155956ms 867.524247ms 870.653407ms 871.596961ms 872.067735ms 873.042744ms 873.932113ms 874.899091ms 875.118153ms 876.00442ms 879.585445ms 880.177819ms 882.286859ms 885.250133ms 886.706237ms 887.015449ms 888.013338ms 888.38471ms 891.606051ms 891.609845ms 892.286169ms 894.302536ms 895.991047ms 900.048418ms 901.487735ms 902.103485ms 902.725043ms 903.224219ms 905.066307ms 906.830092ms 908.599915ms 908.951148ms 910.065027ms 913.667213ms 914.189031ms 916.993989ms 918.829817ms 919.531762ms 920.986021ms 921.675158ms 921.756466ms 922.244646ms 924.479754ms 924.760386ms 928.035704ms 929.249218ms 930.225512ms 930.846662ms 930.84994ms 933.832288ms 933.866222ms 936.477188ms 944.224515ms 944.599567ms 944.758031ms 946.804756ms 946.837838ms 947.200287ms 951.131886ms 952.197001ms 952.479215ms 954.968295ms 956.124273ms 957.476616ms 960.031454ms 961.148007ms 964.799637ms 967.223956ms 967.944299ms 968.173517ms 968.632506ms 968.789985ms 973.085941ms 977.811838ms 983.218801ms 986.688538ms 987.437322ms 995.310484ms 1.004153561s 1.008566464s 1.008608252s 1.010915611s 1.014783645s 1.017470091s 1.017747773s 1.018682533s 1.021757096s 1.022554613s 1.023344129s 1.02626815s 1.026910481s 1.03121269s 1.038447669s 1.041729504s 1.044789337s 1.046987669s 1.047369634s 1.054718524s 1.06042263s 1.062445116s 1.064211728s 1.065090027s 1.079041433s 1.084780583s 1.086696624s 1.088831208s 1.089305715s 1.095493721s 1.097611805s 1.109142806s 
1.11124582s 1.111761815s 1.114112523s 1.118365497s 1.118452426s 1.119885909s 1.134514425s 1.148001722s 1.149188603s 1.155221728s 1.156298848s 1.157446217s 1.15930814s 1.165720811s 1.183178929s 1.214367303s 1.261421193s 1.281639763s 1.304800817s 1.305326179s] Aug 30 17:57:30.386: INFO: 50 %ile: 908.599915ms Aug 30 17:57:30.386: INFO: 90 %ile: 1.11124582s Aug 30 17:57:30.386: INFO: 99 %ile: 1.304800817s Aug 30 17:57:30.386: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 17:57:30.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-426" for this suite. Aug 30 17:57:56.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 17:57:56.542: INFO: namespace svc-latency-426 deletion completed in 26.148968639s • [SLOW TEST:42.916 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 17:57:56.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Aug 30 17:57:56.651: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 17:58:07.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-377" for this suite. 
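The percentile summary in the latency run above (50 %ile: 908.599915ms, 90 %ile: 1.11124582s, 99 %ile: 1.304800817s over 200 samples) comes from sorting the per-service endpoint-propagation latencies and reading off order statistics. A minimal standalone sketch in Go; the nearest-rank indexing below is one common convention and an assumption here, not the e2e framework's verified implementation:

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0 < p <= 100) of samples using a
// nearest-rank convention on a sorted copy. Illustrative only; the suite's
// own rounding may differ at the edges.
func percentile(samples []time.Duration, p int) time.Duration {
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := (len(sorted) * p) / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// Stand-in data in nanoseconds; the real test collects 200 samples
	// like the ones logged above.
	samples := []time.Duration{74754918, 908599915, 1111245820, 1304800817}
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}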
Aug 30 17:58:29.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 17:58:30.134: INFO: namespace init-container-377 deletion completed in 22.172345562s • [SLOW TEST:33.590 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 17:58:30.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Aug 30 17:58:34.791: INFO: Successfully updated pod "annotationupdate48f3054d-f6cf-4a0b-9c9c-d21581f941a3" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 17:58:36.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8113" for this suite. 
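The annotation-update spec above passes because the pod mounts a projected downwardAPI volume that exposes metadata.annotations as a file, and the kubelet rewrites that file after the pod's annotations are updated ("Successfully updated pod ..."). A sketch of such a pod spec built with the k8s.io/api/core/v1 types; the pod name, image, command, and mount path are illustrative assumptions, not values taken from this run:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo", // illustrative name
			Annotations: map[string]string{"builder": "alice"},
		},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "client-container",
				Image: "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c",
					"while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					// Projected downwardAPI source: metadata.annotations is
					// materialized as the file "annotations" in the volume.
					Projected: &v1.ProjectedVolumeSource{
						Sources: []v1.VolumeProjection{{
							DownwardAPI: &v1.DownwardAPIProjection{
								Items: []v1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0])
}

Once such a pod is running, an annotation update on the live pod object is eventually reflected in the mounted file, which is the property the spec asserts.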
Aug 30 17:58:58.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 17:58:59.018: INFO: namespace projected-8113 deletion completed in 22.158650462s • [SLOW TEST:28.881 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 17:58:59.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Aug 30 17:58:59.118: INFO: Waiting up to 5m0s for pod "downward-api-ff7c814f-e61e-4db7-b07b-567eea3943ce" in namespace "downward-api-7813" to be "success or failure" Aug 30 17:58:59.124: INFO: Pod "downward-api-ff7c814f-e61e-4db7-b07b-567eea3943ce": Phase="Pending", Reason="", readiness=false. Elapsed: 5.576708ms Aug 30 17:59:01.131: INFO: Pod "downward-api-ff7c814f-e61e-4db7-b07b-567eea3943ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013334776s Aug 30 17:59:03.139: INFO: Pod "downward-api-ff7c814f-e61e-4db7-b07b-567eea3943ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021210878s STEP: Saw pod success Aug 30 17:59:03.140: INFO: Pod "downward-api-ff7c814f-e61e-4db7-b07b-567eea3943ce" satisfied condition "success or failure" Aug 30 17:59:03.169: INFO: Trying to get logs from node iruya-worker pod downward-api-ff7c814f-e61e-4db7-b07b-567eea3943ce container dapi-container: STEP: delete the pod Aug 30 17:59:03.196: INFO: Waiting for pod downward-api-ff7c814f-e61e-4db7-b07b-567eea3943ce to disappear Aug 30 17:59:03.207: INFO: Pod downward-api-ff7c814f-e61e-4db7-b07b-567eea3943ce no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 17:59:03.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7813" for this suite. 
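The downward-api test above injects the same pod metadata through valueFrom/fieldRef environment variables instead of a volume; the dapi-container then prints its environment for the framework to check. A sketch of the relevant container spec; the variable names, image, and command are illustrative assumptions:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	c := v1.Container{
		Name:    "dapi-container",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"sh", "-c", "env"},
		Env: []v1.EnvVar{
			// Each variable resolves at container start from the pod object.
			{Name: "POD_NAME", ValueFrom: &v1.EnvVarSource{
				FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
			{Name: "POD_NAMESPACE", ValueFrom: &v1.EnvVarSource{
				FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
			{Name: "POD_IP", ValueFrom: &v1.EnvVarSource{
				FieldRef: &v1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
		},
	}
	fmt.Printf("%+v\n", c.Env)
}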
Aug 30 17:59:09.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 17:59:09.366: INFO: namespace downward-api-7813 deletion completed in 6.150058147s • [SLOW TEST:10.344 seconds] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 17:59:09.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6644 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 30 17:59:09.493: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 30 17:59:35.691: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.221:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6644 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 30 17:59:35.691: INFO: >>> kubeConfig: /root/.kube/config I0830 17:59:35.756297 7 log.go:172] (0x4001d40420) (0x4002df59a0) Create stream I0830 17:59:35.756525 7 log.go:172] (0x4001d40420) (0x4002df59a0) Stream added, broadcasting: 1 I0830 17:59:35.765803 7 log.go:172] (0x4001d40420) Reply frame received for 1 I0830 17:59:35.765966 7 log.go:172] (0x4001d40420) (0x4001e461e0) Create stream I0830 17:59:35.766033 7 log.go:172] (0x4001d40420) (0x4001e461e0) Stream added, broadcasting: 3 I0830 17:59:35.769013 7 log.go:172] (0x4001d40420) Reply frame received for 3 I0830 17:59:35.769204 7 log.go:172] (0x4001d40420) (0x400360aa00) Create stream I0830 17:59:35.769267 7 log.go:172] (0x4001d40420) (0x400360aa00) Stream added, broadcasting: 5 I0830 17:59:35.770708 7 log.go:172] (0x4001d40420) Reply frame received for 5 I0830 17:59:35.825003 7 log.go:172] (0x4001d40420) Data frame received for 3 I0830 17:59:35.825163 7 log.go:172] (0x4001e461e0) (3) Data frame handling I0830 17:59:35.825272 7 log.go:172] (0x4001d40420) Data frame received for 5 I0830 17:59:35.825395 7 log.go:172] (0x400360aa00) (5) Data frame handling I0830 17:59:35.825521 7 log.go:172] (0x4001e461e0) (3) Data frame sent I0830 17:59:35.825725 7 log.go:172] (0x4001d40420) Data frame received 
for 3 I0830 17:59:35.825839 7 log.go:172] (0x4001e461e0) (3) Data frame handling I0830 17:59:35.826474 7 log.go:172] (0x4001d40420) Data frame received for 1 I0830 17:59:35.826642 7 log.go:172] (0x4002df59a0) (1) Data frame handling I0830 17:59:35.826764 7 log.go:172] (0x4002df59a0) (1) Data frame sent I0830 17:59:35.826876 7 log.go:172] (0x4001d40420) (0x4002df59a0) Stream removed, broadcasting: 1 I0830 17:59:35.827043 7 log.go:172] (0x4001d40420) Go away received I0830 17:59:35.827330 7 log.go:172] (0x4001d40420) (0x4002df59a0) Stream removed, broadcasting: 1 I0830 17:59:35.827443 7 log.go:172] (0x4001d40420) (0x4001e461e0) Stream removed, broadcasting: 3 I0830 17:59:35.827519 7 log.go:172] (0x4001d40420) (0x400360aa00) Stream removed, broadcasting: 5 Aug 30 17:59:35.827: INFO: Found all expected endpoints: [netserver-0] Aug 30 17:59:35.832: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.19:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6644 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 30 17:59:35.832: INFO: >>> kubeConfig: /root/.kube/config I0830 17:59:35.893792 7 log.go:172] (0x4001274bb0) (0x4002463040) Create stream I0830 17:59:35.893967 7 log.go:172] (0x4001274bb0) (0x4002463040) Stream added, broadcasting: 1 I0830 17:59:35.897989 7 log.go:172] (0x4001274bb0) Reply frame received for 1 I0830 17:59:35.898200 7 log.go:172] (0x4001274bb0) (0x4002df5a40) Create stream I0830 17:59:35.898301 7 log.go:172] (0x4001274bb0) (0x4002df5a40) Stream added, broadcasting: 3 I0830 17:59:35.900169 7 log.go:172] (0x4001274bb0) Reply frame received for 3 I0830 17:59:35.900349 7 log.go:172] (0x4001274bb0) (0x40024630e0) Create stream I0830 17:59:35.900449 7 log.go:172] (0x4001274bb0) (0x40024630e0) Stream added, broadcasting: 5 I0830 17:59:35.902336 7 log.go:172] (0x4001274bb0) Reply frame received for 5 I0830 17:59:35.979374 7 log.go:172] (0x4001274bb0) Data frame received for 5 I0830 17:59:35.979540 7 log.go:172] (0x40024630e0) (5) Data frame handling I0830 17:59:35.979644 7 log.go:172] (0x4001274bb0) Data frame received for 3 I0830 17:59:35.979726 7 log.go:172] (0x4002df5a40) (3) Data frame handling I0830 17:59:35.979804 7 log.go:172] (0x4002df5a40) (3) Data frame sent I0830 17:59:35.979862 7 log.go:172] (0x4001274bb0) Data frame received for 3 I0830 17:59:35.979915 7 log.go:172] (0x4002df5a40) (3) Data frame handling I0830 17:59:35.982202 7 log.go:172] (0x4001274bb0) Data frame received for 1 I0830 17:59:35.982296 7 log.go:172] (0x4002463040) (1) Data frame handling I0830 17:59:35.982419 7 log.go:172] (0x4002463040) (1) Data frame sent I0830 17:59:35.982542 7 log.go:172] (0x4001274bb0) (0x4002463040) Stream removed, broadcasting: 1 I0830 17:59:35.982659 7 log.go:172] (0x4001274bb0) Go away received I0830 17:59:35.983029 7 log.go:172] (0x4001274bb0) (0x4002463040) Stream removed, broadcasting: 1 I0830 17:59:35.983213 7 log.go:172] (0x4001274bb0) (0x4002df5a40) Stream removed, broadcasting: 3 I0830 17:59:35.983336 7 log.go:172] (0x4001274bb0) (0x40024630e0) Stream removed, broadcasting: 5 Aug 30 17:59:35.983: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 17:59:35.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "pod-network-test-6644" for this suite. Aug 30 18:00:00.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:00:00.160: INFO: namespace pod-network-test-6644 deletion completed in 24.167182444s • [SLOW TEST:50.793 seconds] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:00:00.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-43502eaf-6607-482c-bd9c-b567a9add32f STEP: Creating the pod STEP: Updating configmap configmap-test-upd-43502eaf-6607-482c-bd9c-b567a9add32f STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:00:06.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-301" for this suite. 
Aug 30 18:00:18.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:00:18.498: INFO: namespace configmap-301 deletion completed in 12.153000698s • [SLOW TEST:18.337 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:00:18.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Aug 30 18:00:18.579: INFO: PodSpec: initContainers in spec.initContainers Aug 30 18:01:07.605: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1478b647-6caa-4de2-b2d1-9e4ce149352a", GenerateName:"", Namespace:"init-container-3944", SelfLink:"/api/v1/namespaces/init-container-3944/pods/pod-init-1478b647-6caa-4de2-b2d1-9e4ce149352a", UID:"f89a2f0f-621b-451e-8795-d031ec61ea60", ResourceVersion:"4075154", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734407218, loc:(*time.Location)(0x792fa60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"577998401"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tclxk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x4003870180), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), 
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tclxk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tclxk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tclxk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", 
SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40032f8288), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a80060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x40032f8310)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x40032f8330)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x40032f8338), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x40032f833c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734407218, loc:(*time.Location)(0x792fa60)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734407218, loc:(*time.Location)(0x792fa60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734407218, loc:(*time.Location)(0x792fa60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734407218, loc:(*time.Location)(0x792fa60)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.9", PodIP:"10.244.1.222", StartTime:(*v1.Time)(0x40016ec340), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x40016ec520), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x40001b30a0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://6a9ed684712806e999073a3e68fadd2fc3b2cf46c6087ef800918c51e0d54cd4"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x40016ec5a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x40016ec440), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:01:07.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3944" for this suite. Aug 30 18:01:29.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:01:29.818: INFO: namespace init-container-3944 deletion completed in 22.193818163s • [SLOW TEST:71.319 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:01:29.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Aug 30 18:01:35.930: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-78290dbd-e658-401d-bc3b-e41ce05bc207,GenerateName:,Namespace:events-1946,SelfLink:/api/v1/namespaces/events-1946/pods/send-events-78290dbd-e658-401d-bc3b-e41ce05bc207,UID:38c6ada0-2d76-4a95-a9f2-e8f763ccf0bc,ResourceVersion:4075233,Generation:0,CreationTimestamp:2020-08-30 18:01:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 878108161,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-d9ggz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d9ggz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-d9ggz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4002fe3e30} {node.kubernetes.io/unreachable Exists NoExecute 0x4002fe3e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 18:01:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 18:01:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 18:01:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 18:01:29 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.27,StartTime:2020-08-30 18:01:29 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-08-30 18:01:33 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://a19a5453c803e791b39911dcdadb997d409591b238692526c9b75c91743837bf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Aug 30 18:01:37.943: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Aug 30 18:01:39.953: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:01:39.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1946" for this suite. 
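Annotation: the event checks above look up scheduler and kubelet events for the pod just created. A minimal sketch of the field selector such a lookup typically uses (assuming k8s.io/apimachinery is on the module path; pod name, namespace, and the scheduler source are taken from the log, everything else is illustrative):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/fields"
)

func main() {
	// Select events whose involvedObject is the test pod and whose source
	// is the scheduler; swapping the source for the kubelet's would find
	// the kubelet event instead.
	sel := fields.Set{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      "send-events-78290dbd-e658-401d-bc3b-e41ce05bc207",
		"involvedObject.namespace": "events-1946",
		"source":                   "default-scheduler",
	}.AsSelector()
	fmt.Println(sel.String()) // usable as ListOptions.FieldSelector when listing events
}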
Aug 30 18:02:18.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:02:18.140: INFO: namespace events-1946 deletion completed in 38.158307473s • [SLOW TEST:48.317 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:02:18.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Aug 30 18:02:18.226: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 30 18:02:18.284: INFO: Waiting for terminating namespaces to be deleted... 
Aug 30 18:02:18.289: INFO: Logging pods the kubelet thinks are on node iruya-worker before test Aug 30 18:02:18.321: INFO: daemon-set-2gkvj from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded) Aug 30 18:02:18.321: INFO: Container app ready: true, restart count 0 Aug 30 18:02:18.321: INFO: daemon-set-6z8rp from daemonsets-4068 started at 2020-08-25 22:38:22 +0000 UTC (1 container statuses recorded) Aug 30 18:02:18.321: INFO: Container app ready: true, restart count 0 Aug 30 18:02:18.321: INFO: cassandra-76f5c4d86c-hd7ww from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (1 container statuses recorded) Aug 30 18:02:18.322: INFO: Container cassandra ready: true, restart count 0 Aug 30 18:02:18.322: INFO: homer-74dd4556d9-q6gxg from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (1 container statuses recorded) Aug 30 18:02:18.322: INFO: Container homer ready: true, restart count 0 Aug 30 18:02:18.322: INFO: sprout-686cc64cfb-6vw8x from ims-c5hpb started at 2020-08-30 10:12:39 +0000 UTC (2 container statuses recorded) Aug 30 18:02:18.322: INFO: Container sprout ready: true, restart count 0 Aug 30 18:02:18.322: INFO: Container tailer ready: true, restart count 0 Aug 30 18:02:18.322: INFO: homestead-prov-756c8bff5d-zvxsr from ims-c5hpb started at 2020-08-30 10:12:39 +0000 UTC (1 container statuses recorded) Aug 30 18:02:18.322: INFO: Container homestead-prov ready: true, restart count 0 Aug 30 18:02:18.322: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded) Aug 30 18:02:18.322: INFO: Container kube-proxy ready: true, restart count 0 Aug 30 18:02:18.322: INFO: daemon-set-qwbvn from daemonsets-4407 started at 2020-08-24 03:43:04 +0000 UTC (1 container statuses recorded) Aug 30 18:02:18.322: INFO: Container app ready: true, restart count 0 Aug 30 18:02:18.322: INFO: ellis-57b84b6dd7-rt8xk from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (1 container statuses recorded) Aug 30 18:02:18.322: INFO: Container ellis ready: true, restart count 0 Aug 30 18:02:18.322: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded) Aug 30 18:02:18.322: INFO: Container kindnet-cni ready: true, restart count 0 Aug 30 18:02:18.322: INFO: astaire-5ddcdd6b7f-9dgqk from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (2 container statuses recorded) Aug 30 18:02:18.322: INFO: Container astaire ready: true, restart count 0 Aug 30 18:02:18.322: INFO: Container tailer ready: true, restart count 0 Aug 30 18:02:18.322: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test Aug 30 18:02:18.358: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded) Aug 30 18:02:18.358: INFO: Container kube-proxy ready: true, restart count 0 Aug 30 18:02:18.358: INFO: etcd-5cbf55c8c-bmvbb from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (1 container statuses recorded) Aug 30 18:02:18.358: INFO: Container etcd ready: true, restart count 0 Aug 30 18:02:18.358: INFO: live-test from ims-c5hpb started at 2020-08-30 10:18:13 +0000 UTC (1 container statuses recorded) Aug 30 18:02:18.358: INFO: Container live-test ready: false, restart count 0 Aug 30 18:02:18.358: INFO: daemon-set-nk8hf from daemonsets-4407 started at 2020-08-24 03:43:05 +0000 UTC (1 container statuses recorded) Aug 30 18:02:18.358: INFO: Container app ready: true, restart count 0 Aug 30 18:02:18.358: INFO: homestead-57586d6cdc-zf5g4
from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (2 container statuses recorded) Aug 30 18:02:18.358: INFO: Container homestead ready: true, restart count 0 Aug 30 18:02:18.358: INFO: Container tailer ready: true, restart count 0 Aug 30 18:02:18.358: INFO: ralf-57c4654cb8-xhclj from ims-c5hpb started at 2020-08-30 10:12:39 +0000 UTC (2 container statuses recorded) Aug 30 18:02:18.358: INFO: Container ralf ready: true, restart count 0 Aug 30 18:02:18.358: INFO: Container tailer ready: true, restart count 0 Aug 30 18:02:18.358: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded) Aug 30 18:02:18.358: INFO: Container kindnet-cni ready: true, restart count 0 Aug 30 18:02:18.358: INFO: daemon-set-hlzh5 from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded) Aug 30 18:02:18.358: INFO: Container app ready: true, restart count 0 Aug 30 18:02:18.358: INFO: daemon-set-fzgmk from daemonsets-4068 started at 2020-08-25 22:38:22 +0000 UTC (1 container statuses recorded) Aug 30 18:02:18.358: INFO: Container app ready: true, restart count 0 Aug 30 18:02:18.358: INFO: bono-5cdb7bfcdd-8fpzx from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (2 container statuses recorded) Aug 30 18:02:18.359: INFO: Container bono ready: true, restart count 0 Aug 30 18:02:18.359: INFO: Container tailer ready: true, restart count 0 Aug 30 18:02:18.359: INFO: chronos-687b9884c5-m92fc from ims-c5hpb started at 2020-08-30 10:12:38 +0000 UTC (2 container statuses recorded) Aug 30 18:02:18.359: INFO: Container chronos ready: true, restart count 0 Aug 30 18:02:18.359: INFO: Container tailer ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16301db6bce5328c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:02:19.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3657" for this suite. 
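Annotation: the FailedScheduling event above is exactly what an unmatchable NodeSelector produces. A minimal sketch of such a pod, with a hypothetical label key/value (assuming k8s.io/api and k8s.io/apimachinery are on the module path):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// No node carries this label, so the scheduler reports
	// "0/3 nodes are available: 3 node(s) didn't match node selector."
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"e2e.example/unmatched": "true"}, // hypothetical label
			Containers: []corev1.Container{
				{Name: "pause", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	fmt.Printf("%#v\n", pod.Spec.NodeSelector)
}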
Aug 30 18:02:25.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:02:25.601: INFO: namespace sched-pred-3657 deletion completed in 6.172106003s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.459 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:02:25.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Aug 30 18:02:25.673: INFO: Waiting up to 5m0s for pod "pod-25763dc9-3018-420a-b65b-1e535173ad8f" in namespace "emptydir-5185" to be "success or failure" Aug 30 18:02:25.698: INFO: Pod "pod-25763dc9-3018-420a-b65b-1e535173ad8f": Phase="Pending", Reason="", readiness=false. Elapsed: 24.750007ms Aug 30 18:02:27.705: INFO: Pod "pod-25763dc9-3018-420a-b65b-1e535173ad8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031804363s Aug 30 18:02:29.712: INFO: Pod "pod-25763dc9-3018-420a-b65b-1e535173ad8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038533018s Aug 30 18:02:31.719: INFO: Pod "pod-25763dc9-3018-420a-b65b-1e535173ad8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045919477s STEP: Saw pod success Aug 30 18:02:31.719: INFO: Pod "pod-25763dc9-3018-420a-b65b-1e535173ad8f" satisfied condition "success or failure" Aug 30 18:02:31.723: INFO: Trying to get logs from node iruya-worker pod pod-25763dc9-3018-420a-b65b-1e535173ad8f container test-container: STEP: delete the pod Aug 30 18:02:31.758: INFO: Waiting for pod pod-25763dc9-3018-420a-b65b-1e535173ad8f to disappear Aug 30 18:02:31.773: INFO: Pod pod-25763dc9-3018-420a-b65b-1e535173ad8f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:02:31.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5185" for this suite. 
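Annotation: the "volume on tmpfs" case exercises an emptyDir backed by memory. A sketch of the volume definition involved (illustrative; assumes k8s.io/api is available):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Medium "Memory" asks the kubelet to back the emptyDir with tmpfs;
	// the test then checks the mount's file mode from inside the pod.
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	fmt.Printf("%#v\n", vol)
}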
Aug 30 18:02:37.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:02:37.990: INFO: namespace emptydir-5185 deletion completed in 6.210449926s • [SLOW TEST:12.387 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:02:37.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:03:12.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8294" for this suite. 
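Annotation: the three containers above appear to cover one restart policy each (the rpa/rpof/rpn suffixes reading as RestartPolicy Always/OnFailure/Never). A sketch of the shape of one such pod, with a hypothetical exit command (assumes k8s.io/api and k8s.io/apimachinery):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// With RestartPolicy Never, a container that exits is left Terminated
	// and the pod phase settles, which is the kind of
	// RestartCount/Phase/Ready/State expectation checked above.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "terminate-cmd-rpn"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "terminate-cmd-rpn",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "exit 0"}, // hypothetical command
			}},
		},
	}
	fmt.Printf("%#v\n", pod.Spec)
}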
Aug 30 18:03:20.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:03:20.295: INFO: namespace container-runtime-8294 deletion completed in 8.175911582s • [SLOW TEST:42.302 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:03:20.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 30 18:03:20.406: INFO: Waiting up to 5m0s for pod "pod-4f96f986-49ac-4ea3-b9bf-bcca2c0902fc" in namespace "emptydir-612" to be "success or failure" Aug 30 18:03:20.441: INFO: Pod "pod-4f96f986-49ac-4ea3-b9bf-bcca2c0902fc": Phase="Pending", Reason="", readiness=false. Elapsed: 34.40217ms Aug 30 18:03:22.610: INFO: Pod "pod-4f96f986-49ac-4ea3-b9bf-bcca2c0902fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203688955s Aug 30 18:03:24.618: INFO: Pod "pod-4f96f986-49ac-4ea3-b9bf-bcca2c0902fc": Phase="Running", Reason="", readiness=true. Elapsed: 4.211614482s Aug 30 18:03:26.626: INFO: Pod "pod-4f96f986-49ac-4ea3-b9bf-bcca2c0902fc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.219277746s STEP: Saw pod success Aug 30 18:03:26.626: INFO: Pod "pod-4f96f986-49ac-4ea3-b9bf-bcca2c0902fc" satisfied condition "success or failure" Aug 30 18:03:26.657: INFO: Trying to get logs from node iruya-worker pod pod-4f96f986-49ac-4ea3-b9bf-bcca2c0902fc container test-container: STEP: delete the pod Aug 30 18:03:26.694: INFO: Waiting for pod pod-4f96f986-49ac-4ea3-b9bf-bcca2c0902fc to disappear Aug 30 18:03:26.717: INFO: Pod pod-4f96f986-49ac-4ea3-b9bf-bcca2c0902fc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:03:26.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-612" for this suite. Aug 30 18:03:32.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:03:32.907: INFO: namespace emptydir-612 deletion completed in 6.180453936s • [SLOW TEST:12.610 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:03:32.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-jmxc STEP: Creating a pod to test atomic-volume-subpath Aug 30 18:03:33.062: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jmxc" in namespace "subpath-9223" to be "success or failure" Aug 30 18:03:33.117: INFO: Pod "pod-subpath-test-configmap-jmxc": Phase="Pending", Reason="", readiness=false. Elapsed: 55.433741ms Aug 30 18:03:35.124: INFO: Pod "pod-subpath-test-configmap-jmxc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062224286s Aug 30 18:03:37.131: INFO: Pod "pod-subpath-test-configmap-jmxc": Phase="Running", Reason="", readiness=true. Elapsed: 4.068670479s Aug 30 18:03:39.138: INFO: Pod "pod-subpath-test-configmap-jmxc": Phase="Running", Reason="", readiness=true. Elapsed: 6.076480095s Aug 30 18:03:41.146: INFO: Pod "pod-subpath-test-configmap-jmxc": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.084511883s Aug 30 18:03:43.154: INFO: Pod "pod-subpath-test-configmap-jmxc": Phase="Running", Reason="", readiness=true. Elapsed: 10.09211505s Aug 30 18:03:45.162: INFO: Pod "pod-subpath-test-configmap-jmxc": Phase="Running", Reason="", readiness=true. Elapsed: 12.099689356s Aug 30 18:03:47.170: INFO: Pod "pod-subpath-test-configmap-jmxc": Phase="Running", Reason="", readiness=true. Elapsed: 14.10809512s Aug 30 18:03:49.177: INFO: Pod "pod-subpath-test-configmap-jmxc": Phase="Running", Reason="", readiness=true. Elapsed: 16.114921822s Aug 30 18:03:51.184: INFO: Pod "pod-subpath-test-configmap-jmxc": Phase="Running", Reason="", readiness=true. Elapsed: 18.121725544s Aug 30 18:03:53.190: INFO: Pod "pod-subpath-test-configmap-jmxc": Phase="Running", Reason="", readiness=true. Elapsed: 20.128479s Aug 30 18:03:55.197: INFO: Pod "pod-subpath-test-configmap-jmxc": Phase="Running", Reason="", readiness=true. Elapsed: 22.135461575s Aug 30 18:03:57.305: INFO: Pod "pod-subpath-test-configmap-jmxc": Phase="Running", Reason="", readiness=true. Elapsed: 24.242997438s Aug 30 18:03:59.312: INFO: Pod "pod-subpath-test-configmap-jmxc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.250106157s STEP: Saw pod success Aug 30 18:03:59.312: INFO: Pod "pod-subpath-test-configmap-jmxc" satisfied condition "success or failure" Aug 30 18:03:59.320: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-jmxc container test-container-subpath-configmap-jmxc: STEP: delete the pod Aug 30 18:03:59.371: INFO: Waiting for pod pod-subpath-test-configmap-jmxc to disappear Aug 30 18:03:59.378: INFO: Pod pod-subpath-test-configmap-jmxc no longer exists STEP: Deleting pod pod-subpath-test-configmap-jmxc Aug 30 18:03:59.378: INFO: Deleting pod "pod-subpath-test-configmap-jmxc" in namespace "subpath-9223" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:03:59.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9223" for this suite. 
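Annotation: the atomic-writer subpath case mounts a single entry of a configMap volume via SubPath. A minimal sketch of the volume/mount pairing (names are illustrative; assumes k8s.io/api):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // hypothetical name
			},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "configmap-volume",
		MountPath: "/test-volume",
		SubPath:   "configmap-key", // only this entry is visible at the mount path
	}
	fmt.Printf("%#v\n%#v\n", vol, mount)
}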
Aug 30 18:04:05.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:04:05.560: INFO: namespace subpath-9223 deletion completed in 6.170049307s • [SLOW TEST:32.652 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:04:05.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 30 18:04:05.657: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc9719e6-de8b-47b3-9613-db345a9b09c6" in namespace "downward-api-3123" to be "success or failure" Aug 30 18:04:05.667: INFO: Pod "downwardapi-volume-bc9719e6-de8b-47b3-9613-db345a9b09c6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.212779ms Aug 30 18:04:07.672: INFO: Pod "downwardapi-volume-bc9719e6-de8b-47b3-9613-db345a9b09c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014223436s Aug 30 18:04:09.679: INFO: Pod "downwardapi-volume-bc9719e6-de8b-47b3-9613-db345a9b09c6": Phase="Running", Reason="", readiness=true. Elapsed: 4.020892868s Aug 30 18:04:11.685: INFO: Pod "downwardapi-volume-bc9719e6-de8b-47b3-9613-db345a9b09c6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.027711714s STEP: Saw pod success Aug 30 18:04:11.686: INFO: Pod "downwardapi-volume-bc9719e6-de8b-47b3-9613-db345a9b09c6" satisfied condition "success or failure" Aug 30 18:04:11.690: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-bc9719e6-de8b-47b3-9613-db345a9b09c6 container client-container: STEP: delete the pod Aug 30 18:04:11.749: INFO: Waiting for pod downwardapi-volume-bc9719e6-de8b-47b3-9613-db345a9b09c6 to disappear Aug 30 18:04:11.753: INFO: Pod downwardapi-volume-bc9719e6-de8b-47b3-9613-db345a9b09c6 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:04:11.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3123" for this suite. Aug 30 18:04:17.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:04:17.900: INFO: namespace downward-api-3123 deletion completed in 6.140272325s • [SLOW TEST:12.335 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:04:17.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 30 18:04:22.196: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:04:22.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5438" for this suite. 
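Annotation: the DONE match above comes from the FallbackToLogsOnError policy: when a container fails without writing its termination-log file, the kubelet falls back to the tail of the container log. A sketch of a container configured that way (the command is hypothetical; assumes k8s.io/api):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"sh", "-c", "echo DONE; exit 1"}, // hypothetical: fails after logging DONE
		// On error, the log tail becomes the termination message.
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	fmt.Printf("%#v\n", c)
}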
Aug 30 18:04:28.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:04:28.479: INFO: namespace container-runtime-5438 deletion completed in 6.149164064s • [SLOW TEST:10.572 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:04:28.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 30 18:04:28.559: INFO: Creating deployment "test-recreate-deployment" Aug 30 18:04:28.569: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Aug 30 18:04:28.625: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Aug 30 18:04:30.636: INFO: Waiting deployment "test-recreate-deployment" to complete Aug 30 18:04:30.640: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734407468, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734407468, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734407468, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734407468, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Aug 30 18:04:32.647: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Aug 30 18:04:32.659: INFO: Updating deployment test-recreate-deployment Aug 30 18:04:32.659: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Aug 30 18:04:33.653: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-9677,SelfLink:/apis/apps/v1/namespaces/deployment-9677/deployments/test-recreate-deployment,UID:21073a2e-f84b-47aa-84ff-dc645169db51,ResourceVersion:4075840,Generation:2,CreationTimestamp:2020-08-30 18:04:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-08-30 18:04:33 +0000 UTC 2020-08-30 18:04:33 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-30 18:04:33 +0000 UTC 2020-08-30 18:04:28 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Aug 30 18:04:33.686: INFO:
New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-9677,SelfLink:/apis/apps/v1/namespaces/deployment-9677/replicasets/test-recreate-deployment-5c8c9cc69d,UID:a2361d58-f49d-4b06-90a3-068b89e9b3af,ResourceVersion:4075838,Generation:1,CreationTimestamp:2020-08-30 18:04:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 21073a2e-f84b-47aa-84ff-dc645169db51 0x4002502157 0x4002502158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 30 18:04:33.687: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Aug 30 18:04:33.688: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-9677,SelfLink:/apis/apps/v1/namespaces/deployment-9677/replicasets/test-recreate-deployment-6df85df6b9,UID:f430f89e-22d9-4dcc-8697-8c9c4df58011,ResourceVersion:4075830,Generation:2,CreationTimestamp:2020-08-30 18:04:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 21073a2e-f84b-47aa-84ff-dc645169db51 0x4002502297 0x4002502298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 30 18:04:33.805: INFO: Pod "test-recreate-deployment-5c8c9cc69d-gsrqw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-gsrqw,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-9677,SelfLink:/api/v1/namespaces/deployment-9677/pods/test-recreate-deployment-5c8c9cc69d-gsrqw,UID:b33da898-a598-4afa-ac1c-8fae7255e9eb,ResourceVersion:4075843,Generation:0,CreationTimestamp:2020-08-30 18:04:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d a2361d58-f49d-4b06-90a3-068b89e9b3af 0x40033c5307 0x40033c5308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sjkqt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sjkqt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-sjkqt true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x40033c5380} {node.kubernetes.io/unreachable Exists NoExecute 0x40033c53a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 18:04:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 18:04:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-30 18:04:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 18:04:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-30 18:04:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:04:33.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9677" for this suite. 
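Annotation: the dumps above show the Recreate behavior directly — the old ReplicaSet is scaled to Replicas:*0 before the new one starts progressing. A sketch of the strategy field that drives this, with the rest of the spec elided (assumes k8s.io/api/apps/v1):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	// Recreate tears down all old pods before creating new ones, unlike
	// the default RollingUpdate strategy, so old and new pods never overlap.
	strategy := appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType}
	fmt.Printf("%#v\n", strategy)
}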
Aug 30 18:04:39.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:04:40.078: INFO: namespace deployment-9677 deletion completed in 6.262792237s • [SLOW TEST:11.597 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:04:40.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6904 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 30 18:04:40.353: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 30 18:05:08.682: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.234 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6904 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 30 18:05:08.682: INFO: >>> kubeConfig: /root/.kube/config I0830 18:05:08.742608 7 log.go:172] (0x400128c790) (0x40019f72c0) Create stream I0830 18:05:08.742774 7 log.go:172] (0x400128c790) (0x40019f72c0) Stream added, broadcasting: 1 I0830 18:05:08.746069 7 log.go:172] (0x400128c790) Reply frame received for 1 I0830 18:05:08.746213 7 log.go:172] (0x400128c790) (0x40011d0780) Create stream I0830 18:05:08.746282 7 log.go:172] (0x400128c790) (0x40011d0780) Stream added, broadcasting: 3 I0830 18:05:08.747622 7 log.go:172] (0x400128c790) Reply frame received for 3 I0830 18:05:08.747771 7 log.go:172] (0x400128c790) (0x4000a9e140) Create stream I0830 18:05:08.747860 7 log.go:172] (0x400128c790) (0x4000a9e140) Stream added, broadcasting: 5 I0830 18:05:08.749126 7 log.go:172] (0x400128c790) Reply frame received for 5 I0830 18:05:09.835160 7 log.go:172] (0x400128c790) Data frame received for 3 I0830 18:05:09.835464 7 log.go:172] (0x400128c790) Data frame received for 5 I0830 18:05:09.835619 7 log.go:172] (0x4000a9e140) (5) Data frame handling I0830 18:05:09.835792 7 log.go:172] (0x40011d0780) (3) Data frame handling I0830 18:05:09.835987 7 log.go:172] (0x40011d0780) (3) Data frame sent I0830 18:05:09.836114 7 log.go:172] (0x400128c790) Data frame received for 3 I0830 18:05:09.836204 7 log.go:172] (0x40011d0780) (3) Data 
frame handling I0830 18:05:09.837241 7 log.go:172] (0x400128c790) Data frame received for 1 I0830 18:05:09.837335 7 log.go:172] (0x40019f72c0) (1) Data frame handling I0830 18:05:09.837444 7 log.go:172] (0x40019f72c0) (1) Data frame sent I0830 18:05:09.837559 7 log.go:172] (0x400128c790) (0x40019f72c0) Stream removed, broadcasting: 1 I0830 18:05:09.838006 7 log.go:172] (0x400128c790) (0x40019f72c0) Stream removed, broadcasting: 1 I0830 18:05:09.838084 7 log.go:172] (0x400128c790) (0x40011d0780) Stream removed, broadcasting: 3 I0830 18:05:09.838169 7 log.go:172] (0x400128c790) (0x4000a9e140) Stream removed, broadcasting: 5 Aug 30 18:05:09.838: INFO: Found all expected endpoints: [netserver-0] I0830 18:05:09.839440 7 log.go:172] (0x400128c790) Go away received Aug 30 18:05:09.843: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.32 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6904 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 30 18:05:09.843: INFO: >>> kubeConfig: /root/.kube/config I0830 18:05:09.899921 7 log.go:172] (0x40012049a0) (0x4000a9e8c0) Create stream I0830 18:05:09.900054 7 log.go:172] (0x40012049a0) (0x4000a9e8c0) Stream added, broadcasting: 1 I0830 18:05:09.903449 7 log.go:172] (0x40012049a0) Reply frame received for 1 I0830 18:05:09.903584 7 log.go:172] (0x40012049a0) (0x4000a9ec80) Create stream I0830 18:05:09.903650 7 log.go:172] (0x40012049a0) (0x4000a9ec80) Stream added, broadcasting: 3 I0830 18:05:09.905092 7 log.go:172] (0x40012049a0) Reply frame received for 3 I0830 18:05:09.905218 7 log.go:172] (0x40012049a0) (0x4000a9ed20) Create stream I0830 18:05:09.905287 7 log.go:172] (0x40012049a0) (0x4000a9ed20) Stream added, broadcasting: 5 I0830 18:05:09.906503 7 log.go:172] (0x40012049a0) Reply frame received for 5 I0830 18:05:10.966514 7 log.go:172] (0x40012049a0) Data frame received for 5 I0830 18:05:10.966669 7 log.go:172] (0x4000a9ed20) (5) Data frame handling I0830 18:05:10.966813 7 log.go:172] (0x40012049a0) Data frame received for 3 I0830 18:05:10.966975 7 log.go:172] (0x4000a9ec80) (3) Data frame handling I0830 18:05:10.967098 7 log.go:172] (0x4000a9ec80) (3) Data frame sent I0830 18:05:10.967189 7 log.go:172] (0x40012049a0) Data frame received for 3 I0830 18:05:10.967272 7 log.go:172] (0x4000a9ec80) (3) Data frame handling I0830 18:05:10.969193 7 log.go:172] (0x40012049a0) Data frame received for 1 I0830 18:05:10.969304 7 log.go:172] (0x4000a9e8c0) (1) Data frame handling I0830 18:05:10.969415 7 log.go:172] (0x4000a9e8c0) (1) Data frame sent I0830 18:05:10.969531 7 log.go:172] (0x40012049a0) (0x4000a9e8c0) Stream removed, broadcasting: 1 I0830 18:05:10.969669 7 log.go:172] (0x40012049a0) Go away received I0830 18:05:10.969905 7 log.go:172] (0x40012049a0) (0x4000a9e8c0) Stream removed, broadcasting: 1 I0830 18:05:10.970018 7 log.go:172] (0x40012049a0) (0x4000a9ec80) Stream removed, broadcasting: 3 I0830 18:05:10.970105 7 log.go:172] (0x40012049a0) (0x4000a9ed20) Stream removed, broadcasting: 5 Aug 30 18:05:10.970: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:05:10.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6904" for this suite. 
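Annotation: the exec'd probe above is `echo hostName | nc -w 1 -u <pod IP> 8081`. A rough Go equivalent of that UDP round trip (IP and port taken from the log; the one-second read deadline mirrors nc -w 1):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.Dial("udp", "10.244.1.234:8081")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	// Send the probe string the test uses.
	if _, err := conn.Write([]byte("hostName")); err != nil {
		panic(err)
	}
	_ = conn.SetReadDeadline(time.Now().Add(1 * time.Second)) // like nc -w 1
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", buf[:n]) // the netserver endpoint answers with its hostname
}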
Aug 30 18:05:33.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:05:33.150: INFO: namespace pod-network-test-6904 deletion completed in 22.169305129s • [SLOW TEST:53.071 seconds] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:05:33.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Aug 30 18:05:33.271: INFO: Waiting up to 5m0s for pod "var-expansion-cb81675b-6b60-46a2-a3aa-eb89ea5da2ba" in namespace "var-expansion-5367" to be "success or failure" Aug 30 18:05:33.292: INFO: Pod "var-expansion-cb81675b-6b60-46a2-a3aa-eb89ea5da2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 20.251652ms Aug 30 18:05:35.330: INFO: Pod "var-expansion-cb81675b-6b60-46a2-a3aa-eb89ea5da2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058559372s Aug 30 18:05:37.337: INFO: Pod "var-expansion-cb81675b-6b60-46a2-a3aa-eb89ea5da2ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065913837s STEP: Saw pod success Aug 30 18:05:37.338: INFO: Pod "var-expansion-cb81675b-6b60-46a2-a3aa-eb89ea5da2ba" satisfied condition "success or failure" Aug 30 18:05:37.342: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-cb81675b-6b60-46a2-a3aa-eb89ea5da2ba container dapi-container: STEP: delete the pod Aug 30 18:05:37.513: INFO: Waiting for pod var-expansion-cb81675b-6b60-46a2-a3aa-eb89ea5da2ba to disappear Aug 30 18:05:37.556: INFO: Pod var-expansion-cb81675b-6b60-46a2-a3aa-eb89ea5da2ba no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:05:37.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5367" for this suite. 
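[editor's note] The pod this test creates exercises the $(VAR) substitution that the kubelet performs on container commands before any shell runs. A minimal sketch of such a pod follows; all names here are illustrative, not the test's generated ones:

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # $(MY_VAR) is expanded by the kubelet from the env list below,
    # not by the shell; that expansion is what the test verifies.
    command: ["/bin/sh", "-c", "echo expanded: $(MY_VAR)"]
    env:
    - name: MY_VAR
      value: "test-value"
EOF
```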
Aug 30 18:05:43.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:05:43.832: INFO: namespace var-expansion-5367 deletion completed in 6.266570758s • [SLOW TEST:10.680 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:05:43.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-b87d51d3-46e9-4a78-bfa4-af6e9d85deb5 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-b87d51d3-46e9-4a78-bfa4-af6e9d85deb5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:07:04.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4103" for this suite. 
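[editor's note] What happens between "Creating the pod" and "waiting to observe update in volume" above: a configMap is projected into a volume, the configMap object is mutated, and the kubelet eventually rewrites the mounted file. A rough stand-alone version, with hypothetical names:

```sh
kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["/bin/sh", "-c", "while true; do cat /etc/projected/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/projected
  volumes:
  - name: cm-volume
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF
# Mutate the configMap; the mounted file follows after the kubelet's
# sync period, which is why the test above spends over a minute in
# "waiting to observe update in volume".
kubectl patch configmap demo-cm -p '{"data":{"data-1":"value-2"}}'
```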
Aug 30 18:07:26.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:07:26.673: INFO: namespace projected-4103 deletion completed in 22.165047524s • [SLOW TEST:102.838 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:07:26.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 30 18:07:26.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3374' Aug 30 18:07:31.058: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 30 18:07:31.058: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller Aug 30 18:07:31.115: INFO: scanned /root for discovery docs: Aug 30 18:07:31.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3374' Aug 30 18:07:49.602: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Aug 30 18:07:49.602: INFO: stdout: "Created e2e-test-nginx-rc-11a439a1f834c8d19af69bbd160c8f12\nScaling up e2e-test-nginx-rc-11a439a1f834c8d19af69bbd160c8f12 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-11a439a1f834c8d19af69bbd160c8f12 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-11a439a1f834c8d19af69bbd160c8f12 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Aug 30 18:07:49.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3374' Aug 30 18:07:50.914: INFO: stderr: "" Aug 30 18:07:50.914: INFO: stdout: "e2e-test-nginx-rc-11a439a1f834c8d19af69bbd160c8f12-qtjns " Aug 30 18:07:50.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-11a439a1f834c8d19af69bbd160c8f12-qtjns -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3374' Aug 30 18:07:52.256: INFO: stderr: "" Aug 30 18:07:52.256: INFO: stdout: "true" Aug 30 18:07:52.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-11a439a1f834c8d19af69bbd160c8f12-qtjns -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3374' Aug 30 18:07:53.509: INFO: stderr: "" Aug 30 18:07:53.509: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Aug 30 18:07:53.509: INFO: e2e-test-nginx-rc-11a439a1f834c8d19af69bbd160c8f12-qtjns is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Aug 30 18:07:53.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3374' Aug 30 18:07:54.811: INFO: stderr: "" Aug 30 18:07:54.811: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:07:54.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3374" for this suite.
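[editor's note] Both kubectl invocations above print deprecation warnings: rolling-update only works on ReplicationControllers and was removed in later kubectl releases. A sketch of the Deployment-based equivalent (resource name hypothetical):

```sh
kubectl create deployment e2e-demo --image=docker.io/library/nginx:1.14-alpine
# Setting the same image is a no-op for a Deployment, unlike
# rolling-update, which always replaced the pods; to force replacement
# with an unchanged template, restart the rollout instead
# (kubectl rollout restart is available from v1.15 on):
kubectl rollout restart deployment/e2e-demo
kubectl rollout status deployment/e2e-demo
```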
Aug 30 18:08:16.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:08:16.967: INFO: namespace kubectl-3374 deletion completed in 22.146677522s • [SLOW TEST:50.290 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:08:16.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Aug 30 18:08:17.775: INFO: Pod name wrapped-volume-race-b856b395-4146-4f46-b036-9181eca52b5b: Found 0 pods out of 5 Aug 30 18:08:22.802: INFO: Pod name wrapped-volume-race-b856b395-4146-4f46-b036-9181eca52b5b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b856b395-4146-4f46-b036-9181eca52b5b in namespace emptydir-wrapper-6914, will wait for the garbage collector to delete the pods Aug 30 18:08:40.975: INFO: Deleting ReplicationController wrapped-volume-race-b856b395-4146-4f46-b036-9181eca52b5b took: 9.9531ms Aug 30 18:08:41.276: INFO: Terminating ReplicationController wrapped-volume-race-b856b395-4146-4f46-b036-9181eca52b5b pods took: 300.735815ms STEP: Creating RC which spawns configmap-volume pods Aug 30 18:09:23.437: INFO: Pod name wrapped-volume-race-f58f6ea1-52b3-43ff-90e7-c035baafb8cc: Found 0 pods out of 5 Aug 30 18:09:28.455: INFO: Pod name wrapped-volume-race-f58f6ea1-52b3-43ff-90e7-c035baafb8cc: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f58f6ea1-52b3-43ff-90e7-c035baafb8cc in namespace emptydir-wrapper-6914, will wait for the garbage collector to delete the pods Aug 30 18:09:46.561: INFO: Deleting ReplicationController wrapped-volume-race-f58f6ea1-52b3-43ff-90e7-c035baafb8cc took: 7.710491ms Aug 30 18:09:46.862: INFO: Terminating ReplicationController wrapped-volume-race-f58f6ea1-52b3-43ff-90e7-c035baafb8cc pods took: 301.018229ms STEP: Creating RC which spawns configmap-volume pods Aug 30 18:10:34.850: INFO: Pod name wrapped-volume-race-48598d12-61b5-497d-8106-5129a47d4c64: 
Found 0 pods out of 5 Aug 30 18:10:39.865: INFO: Pod name wrapped-volume-race-48598d12-61b5-497d-8106-5129a47d4c64: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-48598d12-61b5-497d-8106-5129a47d4c64 in namespace emptydir-wrapper-6914, will wait for the garbage collector to delete the pods Aug 30 18:10:57.996: INFO: Deleting ReplicationController wrapped-volume-race-48598d12-61b5-497d-8106-5129a47d4c64 took: 11.572793ms Aug 30 18:10:58.297: INFO: Terminating ReplicationController wrapped-volume-race-48598d12-61b5-497d-8106-5129a47d4c64 pods took: 301.056419ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:11:45.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6914" for this suite. Aug 30 18:11:53.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:11:53.657: INFO: namespace emptydir-wrapper-6914 deletion completed in 8.250410925s • [SLOW TEST:216.690 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:11:53.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-b7a6d2d5-dfb3-4bc4-9edb-da84b1cfb54a STEP: Creating a pod to test consume configMaps Aug 30 18:11:53.782: INFO: Waiting up to 5m0s for pod "pod-configmaps-178d336c-c6c6-4601-91e9-b6d85c940850" in namespace "configmap-2694" to be "success or failure" Aug 30 18:11:53.801: INFO: Pod "pod-configmaps-178d336c-c6c6-4601-91e9-b6d85c940850": Phase="Pending", Reason="", readiness=false. Elapsed: 19.414373ms Aug 30 18:11:55.940: INFO: Pod "pod-configmaps-178d336c-c6c6-4601-91e9-b6d85c940850": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157854668s Aug 30 18:11:57.948: INFO: Pod "pod-configmaps-178d336c-c6c6-4601-91e9-b6d85c940850": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.165787058s STEP: Saw pod success Aug 30 18:11:57.948: INFO: Pod "pod-configmaps-178d336c-c6c6-4601-91e9-b6d85c940850" satisfied condition "success or failure" Aug 30 18:11:57.953: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-178d336c-c6c6-4601-91e9-b6d85c940850 container configmap-volume-test: STEP: delete the pod Aug 30 18:11:57.984: INFO: Waiting for pod pod-configmaps-178d336c-c6c6-4601-91e9-b6d85c940850 to disappear Aug 30 18:11:58.001: INFO: Pod pod-configmaps-178d336c-c6c6-4601-91e9-b6d85c940850 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:11:58.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2694" for this suite. Aug 30 18:12:04.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:12:04.172: INFO: namespace configmap-2694 deletion completed in 6.164951911s • [SLOW TEST:10.513 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:12:04.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 30 18:12:04.253: INFO: Waiting up to 5m0s for pod "pod-5e5d0060-b797-4163-8cac-6ceaf98e3d9e" in namespace "emptydir-1374" to be "success or failure" Aug 30 18:12:04.287: INFO: Pod "pod-5e5d0060-b797-4163-8cac-6ceaf98e3d9e": Phase="Pending", Reason="", readiness=false. Elapsed: 34.058279ms Aug 30 18:12:06.293: INFO: Pod "pod-5e5d0060-b797-4163-8cac-6ceaf98e3d9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040081221s Aug 30 18:12:08.302: INFO: Pod "pod-5e5d0060-b797-4163-8cac-6ceaf98e3d9e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.048391363s STEP: Saw pod success Aug 30 18:12:08.302: INFO: Pod "pod-5e5d0060-b797-4163-8cac-6ceaf98e3d9e" satisfied condition "success or failure" Aug 30 18:12:08.306: INFO: Trying to get logs from node iruya-worker2 pod pod-5e5d0060-b797-4163-8cac-6ceaf98e3d9e container test-container: STEP: delete the pod Aug 30 18:12:08.460: INFO: Waiting for pod pod-5e5d0060-b797-4163-8cac-6ceaf98e3d9e to disappear Aug 30 18:12:08.486: INFO: Pod pod-5e5d0060-b797-4163-8cac-6ceaf98e3d9e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:12:08.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1374" for this suite. Aug 30 18:12:14.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:12:14.690: INFO: namespace emptydir-1374 deletion completed in 6.196776402s • [SLOW TEST:10.516 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:12:14.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 30 18:12:29.013: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8702 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 30 18:12:29.014: INFO: >>> kubeConfig: /root/.kube/config I0830 18:12:29.096124 7 log.go:172] (0x40008353f0) (0x40023981e0) Create stream I0830 18:12:29.096327 7 log.go:172] (0x40008353f0) (0x40023981e0) Stream added, broadcasting: 1 I0830 18:12:29.100453 7 log.go:172] (0x40008353f0) Reply frame received for 1 I0830 18:12:29.100667 7 log.go:172] (0x40008353f0) (0x4001e46000) Create stream I0830 18:12:29.100851 7 log.go:172] (0x40008353f0) (0x4001e46000) Stream added, broadcasting: 3 I0830 18:12:29.102820 7 log.go:172] (0x40008353f0) Reply frame received for 3 I0830 18:12:29.103039 7 log.go:172] 
(0x40008353f0) (0x4002398280) Create stream I0830 18:12:29.103139 7 log.go:172] (0x40008353f0) (0x4002398280) Stream added, broadcasting: 5 I0830 18:12:29.104538 7 log.go:172] (0x40008353f0) Reply frame received for 5 I0830 18:12:29.185958 7 log.go:172] (0x40008353f0) Data frame received for 5 I0830 18:12:29.186115 7 log.go:172] (0x4002398280) (5) Data frame handling I0830 18:12:29.186565 7 log.go:172] (0x40008353f0) Data frame received for 3 I0830 18:12:29.186644 7 log.go:172] (0x4001e46000) (3) Data frame handling I0830 18:12:29.186732 7 log.go:172] (0x4001e46000) (3) Data frame sent I0830 18:12:29.186803 7 log.go:172] (0x40008353f0) Data frame received for 3 I0830 18:12:29.186869 7 log.go:172] (0x4001e46000) (3) Data frame handling I0830 18:12:29.187035 7 log.go:172] (0x40008353f0) Data frame received for 1 I0830 18:12:29.187107 7 log.go:172] (0x40023981e0) (1) Data frame handling I0830 18:12:29.187182 7 log.go:172] (0x40023981e0) (1) Data frame sent I0830 18:12:29.187278 7 log.go:172] (0x40008353f0) (0x40023981e0) Stream removed, broadcasting: 1 I0830 18:12:29.187591 7 log.go:172] (0x40008353f0) (0x40023981e0) Stream removed, broadcasting: 1 I0830 18:12:29.187679 7 log.go:172] (0x40008353f0) (0x4001e46000) Stream removed, broadcasting: 3 I0830 18:12:29.189042 7 log.go:172] (0x40008353f0) (0x4002398280) Stream removed, broadcasting: 5 I0830 18:12:29.189236 7 log.go:172] (0x40008353f0) Go away received Aug 30 18:12:29.189: INFO: Exec stderr: "" Aug 30 18:12:29.189: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8702 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 30 18:12:29.189: INFO: >>> kubeConfig: /root/.kube/config I0830 18:12:29.256114 7 log.go:172] (0x4000295600) (0x4001e46320) Create stream I0830 18:12:29.256249 7 log.go:172] (0x4000295600) (0x4001e46320) Stream added, broadcasting: 1 I0830 18:12:29.259540 7 log.go:172] (0x4000295600) Reply frame received for 1 I0830 18:12:29.259793 7 log.go:172] (0x4000295600) (0x40032c4000) Create stream I0830 18:12:29.259902 7 log.go:172] (0x4000295600) (0x40032c4000) Stream added, broadcasting: 3 I0830 18:12:29.261751 7 log.go:172] (0x4000295600) Reply frame received for 3 I0830 18:12:29.261906 7 log.go:172] (0x4000295600) (0x4002b68000) Create stream I0830 18:12:29.261996 7 log.go:172] (0x4000295600) (0x4002b68000) Stream added, broadcasting: 5 I0830 18:12:29.263408 7 log.go:172] (0x4000295600) Reply frame received for 5 I0830 18:12:29.324409 7 log.go:172] (0x4000295600) Data frame received for 5 I0830 18:12:29.324596 7 log.go:172] (0x4002b68000) (5) Data frame handling I0830 18:12:29.324704 7 log.go:172] (0x4000295600) Data frame received for 3 I0830 18:12:29.324908 7 log.go:172] (0x40032c4000) (3) Data frame handling I0830 18:12:29.325005 7 log.go:172] (0x40032c4000) (3) Data frame sent I0830 18:12:29.325071 7 log.go:172] (0x4000295600) Data frame received for 3 I0830 18:12:29.325133 7 log.go:172] (0x40032c4000) (3) Data frame handling I0830 18:12:29.325670 7 log.go:172] (0x4000295600) Data frame received for 1 I0830 18:12:29.325780 7 log.go:172] (0x4001e46320) (1) Data frame handling I0830 18:12:29.325860 7 log.go:172] (0x4001e46320) (1) Data frame sent I0830 18:12:29.326022 7 log.go:172] (0x4000295600) (0x4001e46320) Stream removed, broadcasting: 1 I0830 18:12:29.326175 7 log.go:172] (0x4000295600) Go away received I0830 18:12:29.326399 7 log.go:172] (0x4000295600) (0x4001e46320) Stream removed, broadcasting: 1 I0830 
18:12:29.326502 7 log.go:172] (0x4000295600) (0x40032c4000) Stream removed, broadcasting: 3 I0830 18:12:29.326610 7 log.go:172] (0x4000295600) (0x4002b68000) Stream removed, broadcasting: 5 Aug 30 18:12:29.326: INFO: Exec stderr: "" Aug 30 18:12:29.327: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8702 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 30 18:12:29.327: INFO: >>> kubeConfig: /root/.kube/config I0830 18:12:29.386770 7 log.go:172] (0x4000a82790) (0x4002b68320) Create stream I0830 18:12:29.386891 7 log.go:172] (0x4000a82790) (0x4002b68320) Stream added, broadcasting: 1 I0830 18:12:29.390187 7 log.go:172] (0x4000a82790) Reply frame received for 1 I0830 18:12:29.390378 7 log.go:172] (0x4000a82790) (0x400360a000) Create stream I0830 18:12:29.390449 7 log.go:172] (0x4000a82790) (0x400360a000) Stream added, broadcasting: 3 I0830 18:12:29.392068 7 log.go:172] (0x4000a82790) Reply frame received for 3 I0830 18:12:29.392251 7 log.go:172] (0x4000a82790) (0x4002398320) Create stream I0830 18:12:29.392350 7 log.go:172] (0x4000a82790) (0x4002398320) Stream added, broadcasting: 5 I0830 18:12:29.394174 7 log.go:172] (0x4000a82790) Reply frame received for 5 I0830 18:12:29.466195 7 log.go:172] (0x4000a82790) Data frame received for 5 I0830 18:12:29.466325 7 log.go:172] (0x4002398320) (5) Data frame handling I0830 18:12:29.466491 7 log.go:172] (0x4000a82790) Data frame received for 3 I0830 18:12:29.466667 7 log.go:172] (0x400360a000) (3) Data frame handling I0830 18:12:29.466777 7 log.go:172] (0x400360a000) (3) Data frame sent I0830 18:12:29.466862 7 log.go:172] (0x4000a82790) Data frame received for 3 I0830 18:12:29.466928 7 log.go:172] (0x400360a000) (3) Data frame handling I0830 18:12:29.467315 7 log.go:172] (0x4000a82790) Data frame received for 1 I0830 18:12:29.467405 7 log.go:172] (0x4002b68320) (1) Data frame handling I0830 18:12:29.467477 7 log.go:172] (0x4002b68320) (1) Data frame sent I0830 18:12:29.467570 7 log.go:172] (0x4000a82790) (0x4002b68320) Stream removed, broadcasting: 1 I0830 18:12:29.467662 7 log.go:172] (0x4000a82790) Go away received I0830 18:12:29.468161 7 log.go:172] (0x4000a82790) (0x4002b68320) Stream removed, broadcasting: 1 I0830 18:12:29.468331 7 log.go:172] (0x4000a82790) (0x400360a000) Stream removed, broadcasting: 3 I0830 18:12:29.468427 7 log.go:172] (0x4000a82790) (0x4002398320) Stream removed, broadcasting: 5 Aug 30 18:12:29.468: INFO: Exec stderr: "" Aug 30 18:12:29.468: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8702 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 30 18:12:29.468: INFO: >>> kubeConfig: /root/.kube/config I0830 18:12:29.530964 7 log.go:172] (0x4000d40420) (0x4003192280) Create stream I0830 18:12:29.531105 7 log.go:172] (0x4000d40420) (0x4003192280) Stream added, broadcasting: 1 I0830 18:12:29.535073 7 log.go:172] (0x4000d40420) Reply frame received for 1 I0830 18:12:29.535348 7 log.go:172] (0x4000d40420) (0x40023983c0) Create stream I0830 18:12:29.535477 7 log.go:172] (0x4000d40420) (0x40023983c0) Stream added, broadcasting: 3 I0830 18:12:29.537468 7 log.go:172] (0x4000d40420) Reply frame received for 3 I0830 18:12:29.537712 7 log.go:172] (0x4000d40420) (0x400360a0a0) Create stream I0830 18:12:29.537864 7 log.go:172] (0x4000d40420) (0x400360a0a0) Stream added, broadcasting: 5 I0830 18:12:29.539836 7 log.go:172] (0x4000d40420) 
Reply frame received for 5 I0830 18:12:29.604483 7 log.go:172] (0x4000d40420) Data frame received for 5 I0830 18:12:29.604613 7 log.go:172] (0x400360a0a0) (5) Data frame handling I0830 18:12:29.604886 7 log.go:172] (0x4000d40420) Data frame received for 3 I0830 18:12:29.605074 7 log.go:172] (0x40023983c0) (3) Data frame handling I0830 18:12:29.605173 7 log.go:172] (0x40023983c0) (3) Data frame sent I0830 18:12:29.605243 7 log.go:172] (0x4000d40420) Data frame received for 3 I0830 18:12:29.605309 7 log.go:172] (0x40023983c0) (3) Data frame handling I0830 18:12:29.606422 7 log.go:172] (0x4000d40420) Data frame received for 1 I0830 18:12:29.606490 7 log.go:172] (0x4003192280) (1) Data frame handling I0830 18:12:29.606552 7 log.go:172] (0x4003192280) (1) Data frame sent I0830 18:12:29.606618 7 log.go:172] (0x4000d40420) (0x4003192280) Stream removed, broadcasting: 1 I0830 18:12:29.606696 7 log.go:172] (0x4000d40420) Go away received I0830 18:12:29.607066 7 log.go:172] (0x4000d40420) (0x4003192280) Stream removed, broadcasting: 1 I0830 18:12:29.607160 7 log.go:172] (0x4000d40420) (0x40023983c0) Stream removed, broadcasting: 3 I0830 18:12:29.607249 7 log.go:172] (0x4000d40420) (0x400360a0a0) Stream removed, broadcasting: 5 Aug 30 18:12:29.607: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Aug 30 18:12:29.607: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8702 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 30 18:12:29.607: INFO: >>> kubeConfig: /root/.kube/config I0830 18:12:29.662684 7 log.go:172] (0x400086cdc0) (0x400360a3c0) Create stream I0830 18:12:29.662845 7 log.go:172] (0x400086cdc0) (0x400360a3c0) Stream added, broadcasting: 1 I0830 18:12:29.666468 7 log.go:172] (0x400086cdc0) Reply frame received for 1 I0830 18:12:29.666669 7 log.go:172] (0x400086cdc0) (0x40032c40a0) Create stream I0830 18:12:29.666753 7 log.go:172] (0x400086cdc0) (0x40032c40a0) Stream added, broadcasting: 3 I0830 18:12:29.668577 7 log.go:172] (0x400086cdc0) Reply frame received for 3 I0830 18:12:29.668887 7 log.go:172] (0x400086cdc0) (0x40032c4140) Create stream I0830 18:12:29.668954 7 log.go:172] (0x400086cdc0) (0x40032c4140) Stream added, broadcasting: 5 I0830 18:12:29.670121 7 log.go:172] (0x400086cdc0) Reply frame received for 5 I0830 18:12:29.757585 7 log.go:172] (0x400086cdc0) Data frame received for 3 I0830 18:12:29.757707 7 log.go:172] (0x40032c40a0) (3) Data frame handling I0830 18:12:29.757865 7 log.go:172] (0x400086cdc0) Data frame received for 5 I0830 18:12:29.758050 7 log.go:172] (0x40032c4140) (5) Data frame handling I0830 18:12:29.758238 7 log.go:172] (0x40032c40a0) (3) Data frame sent I0830 18:12:29.758325 7 log.go:172] (0x400086cdc0) Data frame received for 3 I0830 18:12:29.758387 7 log.go:172] (0x40032c40a0) (3) Data frame handling I0830 18:12:29.758579 7 log.go:172] (0x400086cdc0) Data frame received for 1 I0830 18:12:29.758672 7 log.go:172] (0x400360a3c0) (1) Data frame handling I0830 18:12:29.758736 7 log.go:172] (0x400360a3c0) (1) Data frame sent I0830 18:12:29.758803 7 log.go:172] (0x400086cdc0) (0x400360a3c0) Stream removed, broadcasting: 1 I0830 18:12:29.758886 7 log.go:172] (0x400086cdc0) Go away received I0830 18:12:29.759201 7 log.go:172] (0x400086cdc0) (0x400360a3c0) Stream removed, broadcasting: 1 I0830 18:12:29.759277 7 log.go:172] (0x400086cdc0) (0x40032c40a0) Stream removed, broadcasting: 3 I0830 
18:12:29.759353 7 log.go:172] (0x400086cdc0) (0x40032c4140) Stream removed, broadcasting: 5 Aug 30 18:12:29.759: INFO: Exec stderr: "" Aug 30 18:12:29.759: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8702 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 30 18:12:29.759: INFO: >>> kubeConfig: /root/.kube/config I0830 18:12:29.813258 7 log.go:172] (0x4000295ef0) (0x4001e466e0) Create stream I0830 18:12:29.813469 7 log.go:172] (0x4000295ef0) (0x4001e466e0) Stream added, broadcasting: 1 I0830 18:12:29.824094 7 log.go:172] (0x4000295ef0) Reply frame received for 1 I0830 18:12:29.824281 7 log.go:172] (0x4000295ef0) (0x400360a460) Create stream I0830 18:12:29.824357 7 log.go:172] (0x4000295ef0) (0x400360a460) Stream added, broadcasting: 3 I0830 18:12:29.826286 7 log.go:172] (0x4000295ef0) Reply frame received for 3 I0830 18:12:29.826440 7 log.go:172] (0x4000295ef0) (0x40032c4280) Create stream I0830 18:12:29.826533 7 log.go:172] (0x4000295ef0) (0x40032c4280) Stream added, broadcasting: 5 I0830 18:12:29.828527 7 log.go:172] (0x4000295ef0) Reply frame received for 5 I0830 18:12:29.903135 7 log.go:172] (0x4000295ef0) Data frame received for 5 I0830 18:12:29.903306 7 log.go:172] (0x40032c4280) (5) Data frame handling I0830 18:12:29.903474 7 log.go:172] (0x4000295ef0) Data frame received for 3 I0830 18:12:29.903640 7 log.go:172] (0x400360a460) (3) Data frame handling I0830 18:12:29.903786 7 log.go:172] (0x400360a460) (3) Data frame sent I0830 18:12:29.903942 7 log.go:172] (0x4000295ef0) Data frame received for 3 I0830 18:12:29.904074 7 log.go:172] (0x400360a460) (3) Data frame handling I0830 18:12:29.904334 7 log.go:172] (0x4000295ef0) Data frame received for 1 I0830 18:12:29.904423 7 log.go:172] (0x4001e466e0) (1) Data frame handling I0830 18:12:29.904519 7 log.go:172] (0x4001e466e0) (1) Data frame sent I0830 18:12:29.904609 7 log.go:172] (0x4000295ef0) (0x4001e466e0) Stream removed, broadcasting: 1 I0830 18:12:29.904827 7 log.go:172] (0x4000295ef0) Go away received I0830 18:12:29.905157 7 log.go:172] (0x4000295ef0) (0x4001e466e0) Stream removed, broadcasting: 1 I0830 18:12:29.905252 7 log.go:172] (0x4000295ef0) (0x400360a460) Stream removed, broadcasting: 3 I0830 18:12:29.905330 7 log.go:172] (0x4000295ef0) (0x40032c4280) Stream removed, broadcasting: 5 Aug 30 18:12:29.905: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Aug 30 18:12:29.905: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8702 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 30 18:12:29.905: INFO: >>> kubeConfig: /root/.kube/config I0830 18:12:29.964994 7 log.go:172] (0x400078db80) (0x40032c4780) Create stream I0830 18:12:29.965152 7 log.go:172] (0x400078db80) (0x40032c4780) Stream added, broadcasting: 1 I0830 18:12:29.969487 7 log.go:172] (0x400078db80) Reply frame received for 1 I0830 18:12:29.969771 7 log.go:172] (0x400078db80) (0x40032c48c0) Create stream I0830 18:12:29.969886 7 log.go:172] (0x400078db80) (0x40032c48c0) Stream added, broadcasting: 3 I0830 18:12:29.972226 7 log.go:172] (0x400078db80) Reply frame received for 3 I0830 18:12:29.972437 7 log.go:172] (0x400078db80) (0x4001e46820) Create stream I0830 18:12:29.972548 7 log.go:172] (0x400078db80) (0x4001e46820) Stream added, broadcasting: 5 I0830 18:12:29.975012 7 log.go:172] 
(0x400078db80) Reply frame received for 5 I0830 18:12:30.052455 7 log.go:172] (0x400078db80) Data frame received for 3 I0830 18:12:30.052624 7 log.go:172] (0x40032c48c0) (3) Data frame handling I0830 18:12:30.052830 7 log.go:172] (0x40032c48c0) (3) Data frame sent I0830 18:12:30.052918 7 log.go:172] (0x400078db80) Data frame received for 3 I0830 18:12:30.052995 7 log.go:172] (0x40032c48c0) (3) Data frame handling I0830 18:12:30.053116 7 log.go:172] (0x400078db80) Data frame received for 5 I0830 18:12:30.053227 7 log.go:172] (0x4001e46820) (5) Data frame handling I0830 18:12:30.054397 7 log.go:172] (0x400078db80) Data frame received for 1 I0830 18:12:30.054487 7 log.go:172] (0x40032c4780) (1) Data frame handling I0830 18:12:30.054571 7 log.go:172] (0x40032c4780) (1) Data frame sent I0830 18:12:30.054645 7 log.go:172] (0x400078db80) (0x40032c4780) Stream removed, broadcasting: 1 I0830 18:12:30.054725 7 log.go:172] (0x400078db80) Go away received I0830 18:12:30.055161 7 log.go:172] (0x400078db80) (0x40032c4780) Stream removed, broadcasting: 1 I0830 18:12:30.055288 7 log.go:172] (0x400078db80) (0x40032c48c0) Stream removed, broadcasting: 3 I0830 18:12:30.055388 7 log.go:172] (0x400078db80) (0x4001e46820) Stream removed, broadcasting: 5 Aug 30 18:12:30.055: INFO: Exec stderr: "" Aug 30 18:12:30.055: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8702 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 30 18:12:30.055: INFO: >>> kubeConfig: /root/.kube/config I0830 18:12:30.117579 7 log.go:172] (0x4000d41600) (0x4003192640) Create stream I0830 18:12:30.117793 7 log.go:172] (0x4000d41600) (0x4003192640) Stream added, broadcasting: 1 I0830 18:12:30.123125 7 log.go:172] (0x4000d41600) Reply frame received for 1 I0830 18:12:30.123323 7 log.go:172] (0x4000d41600) (0x400360a500) Create stream I0830 18:12:30.123432 7 log.go:172] (0x4000d41600) (0x400360a500) Stream added, broadcasting: 3 I0830 18:12:30.125564 7 log.go:172] (0x4000d41600) Reply frame received for 3 I0830 18:12:30.125745 7 log.go:172] (0x4000d41600) (0x40031926e0) Create stream I0830 18:12:30.125849 7 log.go:172] (0x4000d41600) (0x40031926e0) Stream added, broadcasting: 5 I0830 18:12:30.127668 7 log.go:172] (0x4000d41600) Reply frame received for 5 I0830 18:12:30.201795 7 log.go:172] (0x4000d41600) Data frame received for 5 I0830 18:12:30.201980 7 log.go:172] (0x40031926e0) (5) Data frame handling I0830 18:12:30.202097 7 log.go:172] (0x4000d41600) Data frame received for 3 I0830 18:12:30.202226 7 log.go:172] (0x400360a500) (3) Data frame handling I0830 18:12:30.202371 7 log.go:172] (0x400360a500) (3) Data frame sent I0830 18:12:30.202473 7 log.go:172] (0x4000d41600) Data frame received for 3 I0830 18:12:30.202567 7 log.go:172] (0x400360a500) (3) Data frame handling I0830 18:12:30.203141 7 log.go:172] (0x4000d41600) Data frame received for 1 I0830 18:12:30.203285 7 log.go:172] (0x4003192640) (1) Data frame handling I0830 18:12:30.203391 7 log.go:172] (0x4003192640) (1) Data frame sent I0830 18:12:30.203539 7 log.go:172] (0x4000d41600) (0x4003192640) Stream removed, broadcasting: 1 I0830 18:12:30.203683 7 log.go:172] (0x4000d41600) Go away received I0830 18:12:30.204141 7 log.go:172] (0x4000d41600) (0x4003192640) Stream removed, broadcasting: 1 I0830 18:12:30.204312 7 log.go:172] (0x4000d41600) (0x400360a500) Stream removed, broadcasting: 3 I0830 18:12:30.204413 7 log.go:172] (0x4000d41600) (0x40031926e0) Stream removed, 
broadcasting: 5 Aug 30 18:12:30.204: INFO: Exec stderr: "" Aug 30 18:12:30.204: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8702 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 30 18:12:30.205: INFO: >>> kubeConfig: /root/.kube/config I0830 18:12:30.273112 7 log.go:172] (0x4001966210) (0x4003192a00) Create stream I0830 18:12:30.273316 7 log.go:172] (0x4001966210) (0x4003192a00) Stream added, broadcasting: 1 I0830 18:12:30.278920 7 log.go:172] (0x4001966210) Reply frame received for 1 I0830 18:12:30.279173 7 log.go:172] (0x4001966210) (0x4002b683c0) Create stream I0830 18:12:30.279289 7 log.go:172] (0x4001966210) (0x4002b683c0) Stream added, broadcasting: 3 I0830 18:12:30.281385 7 log.go:172] (0x4001966210) Reply frame received for 3 I0830 18:12:30.281607 7 log.go:172] (0x4001966210) (0x4002398500) Create stream I0830 18:12:30.281707 7 log.go:172] (0x4001966210) (0x4002398500) Stream added, broadcasting: 5 I0830 18:12:30.283582 7 log.go:172] (0x4001966210) Reply frame received for 5 I0830 18:12:30.366337 7 log.go:172] (0x4001966210) Data frame received for 5 I0830 18:12:30.366514 7 log.go:172] (0x4002398500) (5) Data frame handling I0830 18:12:30.366692 7 log.go:172] (0x4001966210) Data frame received for 3 I0830 18:12:30.366862 7 log.go:172] (0x4002b683c0) (3) Data frame handling I0830 18:12:30.367056 7 log.go:172] (0x4002b683c0) (3) Data frame sent I0830 18:12:30.367287 7 log.go:172] (0x4001966210) Data frame received for 3 I0830 18:12:30.367498 7 log.go:172] (0x4002b683c0) (3) Data frame handling I0830 18:12:30.367750 7 log.go:172] (0x4001966210) Data frame received for 1 I0830 18:12:30.367910 7 log.go:172] (0x4003192a00) (1) Data frame handling I0830 18:12:30.368043 7 log.go:172] (0x4003192a00) (1) Data frame sent I0830 18:12:30.368183 7 log.go:172] (0x4001966210) (0x4003192a00) Stream removed, broadcasting: 1 I0830 18:12:30.368331 7 log.go:172] (0x4001966210) Go away received I0830 18:12:30.368914 7 log.go:172] (0x4001966210) (0x4003192a00) Stream removed, broadcasting: 1 I0830 18:12:30.369032 7 log.go:172] (0x4001966210) (0x4002b683c0) Stream removed, broadcasting: 3 I0830 18:12:30.369128 7 log.go:172] (0x4001966210) (0x4002398500) Stream removed, broadcasting: 5 Aug 30 18:12:30.369: INFO: Exec stderr: "" Aug 30 18:12:30.369: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8702 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 30 18:12:30.369: INFO: >>> kubeConfig: /root/.kube/config I0830 18:12:30.435702 7 log.go:172] (0x4000a83760) (0x4002b688c0) Create stream I0830 18:12:30.435887 7 log.go:172] (0x4000a83760) (0x4002b688c0) Stream added, broadcasting: 1 I0830 18:12:30.442943 7 log.go:172] (0x4000a83760) Reply frame received for 1 I0830 18:12:30.443340 7 log.go:172] (0x4000a83760) (0x40032c4a00) Create stream I0830 18:12:30.443523 7 log.go:172] (0x4000a83760) (0x40032c4a00) Stream added, broadcasting: 3 I0830 18:12:30.446262 7 log.go:172] (0x4000a83760) Reply frame received for 3 I0830 18:12:30.446449 7 log.go:172] (0x4000a83760) (0x4002b68960) Create stream I0830 18:12:30.446541 7 log.go:172] (0x4000a83760) (0x4002b68960) Stream added, broadcasting: 5 I0830 18:12:30.450534 7 log.go:172] (0x4000a83760) Reply frame received for 5 I0830 18:12:30.526348 7 log.go:172] (0x4000a83760) Data frame received for 5 I0830 18:12:30.526466 7 log.go:172] 
(0x4002b68960) (5) Data frame handling I0830 18:12:30.526633 7 log.go:172] (0x4000a83760) Data frame received for 3 I0830 18:12:30.526774 7 log.go:172] (0x40032c4a00) (3) Data frame handling I0830 18:12:30.527014 7 log.go:172] (0x40032c4a00) (3) Data frame sent I0830 18:12:30.527109 7 log.go:172] (0x4000a83760) Data frame received for 3 I0830 18:12:30.527189 7 log.go:172] (0x40032c4a00) (3) Data frame handling I0830 18:12:30.528117 7 log.go:172] (0x4000a83760) Data frame received for 1 I0830 18:12:30.528290 7 log.go:172] (0x4002b688c0) (1) Data frame handling I0830 18:12:30.528420 7 log.go:172] (0x4002b688c0) (1) Data frame sent I0830 18:12:30.528524 7 log.go:172] (0x4000a83760) (0x4002b688c0) Stream removed, broadcasting: 1 I0830 18:12:30.528646 7 log.go:172] (0x4000a83760) Go away received I0830 18:12:30.529073 7 log.go:172] (0x4000a83760) (0x4002b688c0) Stream removed, broadcasting: 1 I0830 18:12:30.529168 7 log.go:172] (0x4000a83760) (0x40032c4a00) Stream removed, broadcasting: 3 I0830 18:12:30.529246 7 log.go:172] (0x4000a83760) (0x4002b68960) Stream removed, broadcasting: 5 Aug 30 18:12:30.529: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:12:30.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8702" for this suite. Aug 30 18:13:16.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:13:16.743: INFO: namespace e2e-kubelet-etc-hosts-8702 deletion completed in 46.206028779s • [SLOW TEST:62.053 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:13:16.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4475 [It] should perform canary updates and phased rolling updates of template modifications 
[Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Aug 30 18:13:16.990: INFO: Found 0 stateful pods, waiting for 3 Aug 30 18:13:26.999: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 30 18:13:26.999: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 30 18:13:26.999: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 30 18:13:37.001: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 30 18:13:37.001: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 30 18:13:37.001: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Aug 30 18:13:37.046: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Aug 30 18:13:47.121: INFO: Updating stateful set ss2 Aug 30 18:13:47.131: INFO: Waiting for Pod statefulset-4475/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Aug 30 18:13:57.280: INFO: Found 2 stateful pods, waiting for 3 Aug 30 18:14:07.289: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 30 18:14:07.289: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 30 18:14:07.290: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Aug 30 18:14:07.326: INFO: Updating stateful set ss2 Aug 30 18:14:07.341: INFO: Waiting for Pod statefulset-4475/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 30 18:14:17.381: INFO: Updating stateful set ss2 Aug 30 18:14:17.436: INFO: Waiting for StatefulSet statefulset-4475/ss2 to complete update Aug 30 18:14:17.437: INFO: Waiting for Pod statefulset-4475/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 30 18:14:27.477: INFO: Waiting for StatefulSet statefulset-4475/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Aug 30 18:14:37.453: INFO: Deleting all statefulset in ns statefulset-4475 Aug 30 18:14:37.458: INFO: Scaling statefulset ss2 to 0 Aug 30 18:14:57.481: INFO: Waiting for statefulset status.replicas updated to 0 Aug 30 18:14:57.486: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:14:57.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4475" for this suite. 
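[editor's note] The canary and phased steps above are both driven by the RollingUpdate partition field: pods with an ordinal >= partition receive the new revision, the rest keep the old one. With a hypothetical 3-replica StatefulSet named ss2 whose container is called nginx, the same sequence reduces to:

```sh
# Canary: only the highest ordinal (ss2-2) is allowed to update.
kubectl patch statefulset ss2 -p \
  '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
# Phased roll-out: lower the partition (2 -> 1 -> 0) so ss2-1 and
# finally ss2-0 pick up the new revision in controlled steps.
kubectl patch statefulset ss2 -p \
  '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl rollout status statefulset/ss2
```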
Aug 30 18:15:03.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:15:03.664: INFO: namespace statefulset-4475 deletion completed in 6.149307398s • [SLOW TEST:106.920 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:15:03.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Aug 30 18:15:08.324: INFO: Successfully updated pod "annotationupdate6841b2c9-dae7-4fde-9892-289743ae26ba" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:15:10.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-730" for this suite. 
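[editor's note] The "Successfully updated pod" line above corresponds to re-annotating a pod that mounts its own metadata through a downwardAPI volume; the kubelet rewrites the mounted file in place, with no container restart. A hedged reconstruction with made-up names:

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
# Changing the annotation updates /etc/podinfo/annotations in place:
kubectl annotate pod annotationupdate-demo builder=bob --overwrite
```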
Aug 30 18:15:32.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:15:32.622: INFO: namespace downward-api-730 deletion completed in 22.177663324s • [SLOW TEST:28.957 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:15:32.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Aug 30 18:15:32.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5735' Aug 30 18:15:34.393: INFO: stderr: "" Aug 30 18:15:34.394: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 30 18:15:34.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5735' Aug 30 18:15:35.689: INFO: stderr: "" Aug 30 18:15:35.689: INFO: stdout: "update-demo-nautilus-dd7xf update-demo-nautilus-xc97x " Aug 30 18:15:35.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dd7xf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5735' Aug 30 18:15:36.974: INFO: stderr: "" Aug 30 18:15:36.974: INFO: stdout: "" Aug 30 18:15:36.974: INFO: update-demo-nautilus-dd7xf is created but not running Aug 30 18:15:41.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5735' Aug 30 18:15:43.281: INFO: stderr: "" Aug 30 18:15:43.281: INFO: stdout: "update-demo-nautilus-dd7xf update-demo-nautilus-xc97x " Aug 30 18:15:43.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dd7xf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5735' Aug 30 18:15:44.547: INFO: stderr: "" Aug 30 18:15:44.547: INFO: stdout: "true" Aug 30 18:15:44.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dd7xf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5735' Aug 30 18:15:45.822: INFO: stderr: "" Aug 30 18:15:45.822: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 30 18:15:45.822: INFO: validating pod update-demo-nautilus-dd7xf Aug 30 18:15:45.829: INFO: got data: { "image": "nautilus.jpg" } Aug 30 18:15:45.829: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 30 18:15:45.829: INFO: update-demo-nautilus-dd7xf is verified up and running Aug 30 18:15:45.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xc97x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5735' Aug 30 18:15:47.125: INFO: stderr: "" Aug 30 18:15:47.125: INFO: stdout: "true" Aug 30 18:15:47.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xc97x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5735' Aug 30 18:15:48.416: INFO: stderr: "" Aug 30 18:15:48.416: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 30 18:15:48.417: INFO: validating pod update-demo-nautilus-xc97x Aug 30 18:15:48.422: INFO: got data: { "image": "nautilus.jpg" } Aug 30 18:15:48.423: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 30 18:15:48.423: INFO: update-demo-nautilus-xc97x is verified up and running STEP: scaling down the replication controller Aug 30 18:15:48.429: INFO: scanned /root for discovery docs: Aug 30 18:15:48.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5735' Aug 30 18:15:51.091: INFO: stderr: "" Aug 30 18:15:51.091: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Aug 30 18:15:51.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5735' Aug 30 18:15:52.427: INFO: stderr: "" Aug 30 18:15:52.428: INFO: stdout: "update-demo-nautilus-dd7xf update-demo-nautilus-xc97x " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 30 18:15:57.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5735' Aug 30 18:15:58.791: INFO: stderr: "" Aug 30 18:15:58.791: INFO: stdout: "update-demo-nautilus-dd7xf update-demo-nautilus-xc97x " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 30 18:16:03.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5735' Aug 30 18:16:05.066: INFO: stderr: "" Aug 30 18:16:05.066: INFO: stdout: "update-demo-nautilus-dd7xf " Aug 30 18:16:05.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dd7xf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5735' Aug 30 18:16:06.365: INFO: stderr: "" Aug 30 18:16:06.365: INFO: stdout: "true" Aug 30 18:16:06.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dd7xf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5735' Aug 30 18:16:07.649: INFO: stderr: "" Aug 30 18:16:07.649: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 30 18:16:07.649: INFO: validating pod update-demo-nautilus-dd7xf Aug 30 18:16:07.662: INFO: got data: { "image": "nautilus.jpg" } Aug 30 18:16:07.662: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 30 18:16:07.662: INFO: update-demo-nautilus-dd7xf is verified up and running STEP: scaling up the replication controller Aug 30 18:16:07.668: INFO: scanned /root for discovery docs: Aug 30 18:16:07.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5735' Aug 30 18:16:09.070: INFO: stderr: "" Aug 30 18:16:09.070: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 30 18:16:09.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5735' Aug 30 18:16:10.362: INFO: stderr: "" Aug 30 18:16:10.362: INFO: stdout: "update-demo-nautilus-dd7xf update-demo-nautilus-rdw85 " Aug 30 18:16:10.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dd7xf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5735' Aug 30 18:16:11.628: INFO: stderr: "" Aug 30 18:16:11.628: INFO: stdout: "true" Aug 30 18:16:11.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dd7xf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5735' Aug 30 18:16:12.914: INFO: stderr: "" Aug 30 18:16:12.914: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 30 18:16:12.914: INFO: validating pod update-demo-nautilus-dd7xf Aug 30 18:16:12.918: INFO: got data: { "image": "nautilus.jpg" } Aug 30 18:16:12.919: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 30 18:16:12.919: INFO: update-demo-nautilus-dd7xf is verified up and running Aug 30 18:16:12.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rdw85 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5735' Aug 30 18:16:14.199: INFO: stderr: "" Aug 30 18:16:14.199: INFO: stdout: "true" Aug 30 18:16:14.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rdw85 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5735' Aug 30 18:16:15.511: INFO: stderr: "" Aug 30 18:16:15.511: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 30 18:16:15.511: INFO: validating pod update-demo-nautilus-rdw85 Aug 30 18:16:15.517: INFO: got data: { "image": "nautilus.jpg" } Aug 30 18:16:15.517: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 30 18:16:15.517: INFO: update-demo-nautilus-rdw85 is verified up and running STEP: using delete to clean up resources Aug 30 18:16:15.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5735' Aug 30 18:16:16.813: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 30 18:16:16.813: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 30 18:16:16.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5735' Aug 30 18:16:18.145: INFO: stderr: "No resources found.\n" Aug 30 18:16:18.145: INFO: stdout: "" Aug 30 18:16:18.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5735 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 30 18:16:19.473: INFO: stderr: "" Aug 30 18:16:19.473: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:16:19.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5735" for this suite. 
Aug 30 18:16:25.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 18:16:25.644: INFO: namespace kubectl-5735 deletion completed in 6.162643199s
• [SLOW TEST:53.021 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should scale a replication controller [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 18:16:25.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-6be38b5a-15fa-4323-bc60-149fe745abda
STEP: Creating a pod to test consume secrets
Aug 30 18:16:25.837: INFO: Waiting up to 5m0s for pod "pod-secrets-ef18490f-4e4b-4897-848f-2d5e884f8649" in namespace "secrets-4440" to be "success or failure"
Aug 30 18:16:25.869: INFO: Pod "pod-secrets-ef18490f-4e4b-4897-848f-2d5e884f8649": Phase="Pending", Reason="", readiness=false. Elapsed: 31.745059ms
Aug 30 18:16:27.957: INFO: Pod "pod-secrets-ef18490f-4e4b-4897-848f-2d5e884f8649": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118981443s
Aug 30 18:16:30.010: INFO: Pod "pod-secrets-ef18490f-4e4b-4897-848f-2d5e884f8649": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.172422475s
STEP: Saw pod success
Aug 30 18:16:30.010: INFO: Pod "pod-secrets-ef18490f-4e4b-4897-848f-2d5e884f8649" satisfied condition "success or failure"
Aug 30 18:16:30.032: INFO: Trying to get logs from node iruya-worker pod pod-secrets-ef18490f-4e4b-4897-848f-2d5e884f8649 container secret-volume-test:
STEP: delete the pod
Aug 30 18:16:30.258: INFO: Waiting for pod pod-secrets-ef18490f-4e4b-4897-848f-2d5e884f8649 to disappear
Aug 30 18:16:30.382: INFO: Pod pod-secrets-ef18490f-4e4b-4897-848f-2d5e884f8649 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 30 18:16:30.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4440" for this suite.
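
The "with mappings" variant just run differs from the plain secret-volume test only in the items stanza, which renames a key on its way into the container. A sketch with illustrative names (secret-test-map, data-1, new-path-data-1), assuming a reachable cluster:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-map
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # The file appears under the mapped path, not under the key name.
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1
EOF
kubectl logs pod-secrets-map   # prints "value-1" once the pod has succeeded
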
Aug 30 18:16:36.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:16:36.595: INFO: namespace secrets-4440 deletion completed in 6.188493714s • [SLOW TEST:10.949 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:16:36.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Aug 30 18:16:36.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4504' Aug 30 18:16:38.365: INFO: stderr: "" Aug 30 18:16:38.365: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 30 18:16:38.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4504' Aug 30 18:16:39.639: INFO: stderr: "" Aug 30 18:16:39.639: INFO: stdout: "update-demo-nautilus-qd9bd update-demo-nautilus-rvl87 " Aug 30 18:16:39.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qd9bd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4504' Aug 30 18:16:40.959: INFO: stderr: "" Aug 30 18:16:40.959: INFO: stdout: "" Aug 30 18:16:40.959: INFO: update-demo-nautilus-qd9bd is created but not running Aug 30 18:16:45.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4504' Aug 30 18:16:47.272: INFO: stderr: "" Aug 30 18:16:47.272: INFO: stdout: "update-demo-nautilus-qd9bd update-demo-nautilus-rvl87 " Aug 30 18:16:47.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qd9bd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4504' Aug 30 18:16:48.541: INFO: stderr: "" Aug 30 18:16:48.541: INFO: stdout: "true" Aug 30 18:16:48.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qd9bd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4504' Aug 30 18:16:49.851: INFO: stderr: "" Aug 30 18:16:49.851: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 30 18:16:49.852: INFO: validating pod update-demo-nautilus-qd9bd Aug 30 18:16:49.873: INFO: got data: { "image": "nautilus.jpg" } Aug 30 18:16:49.873: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 30 18:16:49.873: INFO: update-demo-nautilus-qd9bd is verified up and running Aug 30 18:16:49.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvl87 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4504' Aug 30 18:16:51.156: INFO: stderr: "" Aug 30 18:16:51.156: INFO: stdout: "true" Aug 30 18:16:51.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvl87 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4504' Aug 30 18:16:52.441: INFO: stderr: "" Aug 30 18:16:52.441: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 30 18:16:52.441: INFO: validating pod update-demo-nautilus-rvl87 Aug 30 18:16:52.447: INFO: got data: { "image": "nautilus.jpg" } Aug 30 18:16:52.447: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 30 18:16:52.447: INFO: update-demo-nautilus-rvl87 is verified up and running STEP: using delete to clean up resources Aug 30 18:16:52.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4504' Aug 30 18:16:53.734: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 30 18:16:53.734: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 30 18:16:53.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4504' Aug 30 18:16:55.098: INFO: stderr: "No resources found.\n" Aug 30 18:16:55.098: INFO: stdout: "" Aug 30 18:16:55.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4504 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 30 18:16:56.441: INFO: stderr: "" Aug 30 18:16:56.441: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:16:56.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4504" for this suite. Aug 30 18:17:18.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:17:18.601: INFO: namespace kubectl-4504 deletion completed in 22.149979823s • [SLOW TEST:42.005 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:17:18.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Aug 30 18:17:18.711: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2851,SelfLink:/api/v1/namespaces/watch-2851/configmaps/e2e-watch-test-label-changed,UID:49e49951-eb9b-437b-b149-e895389a8aad,ResourceVersion:4078948,Generation:0,CreationTimestamp:2020-08-30 18:17:18 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 30 18:17:18.712: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2851,SelfLink:/api/v1/namespaces/watch-2851/configmaps/e2e-watch-test-label-changed,UID:49e49951-eb9b-437b-b149-e895389a8aad,ResourceVersion:4078949,Generation:0,CreationTimestamp:2020-08-30 18:17:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Aug 30 18:17:18.712: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2851,SelfLink:/api/v1/namespaces/watch-2851/configmaps/e2e-watch-test-label-changed,UID:49e49951-eb9b-437b-b149-e895389a8aad,ResourceVersion:4078950,Generation:0,CreationTimestamp:2020-08-30 18:17:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Aug 30 18:17:28.783: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2851,SelfLink:/api/v1/namespaces/watch-2851/configmaps/e2e-watch-test-label-changed,UID:49e49951-eb9b-437b-b149-e895389a8aad,ResourceVersion:4078971,Generation:0,CreationTimestamp:2020-08-30 18:17:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 30 18:17:28.783: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2851,SelfLink:/api/v1/namespaces/watch-2851/configmaps/e2e-watch-test-label-changed,UID:49e49951-eb9b-437b-b149-e895389a8aad,ResourceVersion:4078972,Generation:0,CreationTimestamp:2020-08-30 18:17:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Aug 30 18:17:28.784: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2851,SelfLink:/api/v1/namespaces/watch-2851/configmaps/e2e-watch-test-label-changed,UID:49e49951-eb9b-437b-b149-e895389a8aad,ResourceVersion:4078973,Generation:0,CreationTimestamp:2020-08-30 18:17:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:17:28.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2851" for this suite. Aug 30 18:17:34.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:17:34.965: INFO: namespace watch-2851 deletion completed in 6.156048964s • [SLOW TEST:16.363 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:17:34.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-5d188c96-c703-43bf-b6b8-e31ac7ff5288 STEP: Creating a pod to test consume secrets Aug 30 18:17:35.073: INFO: Waiting up to 5m0s for pod "pod-secrets-c7353989-3fcd-4534-afc5-3a8c656f115f" in namespace "secrets-6662" to be "success or failure" Aug 30 18:17:35.094: INFO: Pod "pod-secrets-c7353989-3fcd-4534-afc5-3a8c656f115f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.329353ms Aug 30 18:17:37.572: INFO: Pod "pod-secrets-c7353989-3fcd-4534-afc5-3a8c656f115f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.499427322s Aug 30 18:17:39.580: INFO: Pod "pod-secrets-c7353989-3fcd-4534-afc5-3a8c656f115f": Phase="Running", Reason="", readiness=true. Elapsed: 4.506987027s Aug 30 18:17:41.596: INFO: Pod "pod-secrets-c7353989-3fcd-4534-afc5-3a8c656f115f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.523026595s STEP: Saw pod success Aug 30 18:17:41.596: INFO: Pod "pod-secrets-c7353989-3fcd-4534-afc5-3a8c656f115f" satisfied condition "success or failure" Aug 30 18:17:41.600: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-c7353989-3fcd-4534-afc5-3a8c656f115f container secret-volume-test: STEP: delete the pod Aug 30 18:17:41.633: INFO: Waiting for pod pod-secrets-c7353989-3fcd-4534-afc5-3a8c656f115f to disappear Aug 30 18:17:41.663: INFO: Pod pod-secrets-c7353989-3fcd-4534-afc5-3a8c656f115f no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:17:41.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6662" for this suite. Aug 30 18:17:47.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:17:47.846: INFO: namespace secrets-6662 deletion completed in 6.172664456s • [SLOW TEST:12.877 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:17:47.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 30 18:17:47.946: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 30 18:17:52.955: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:17:53.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-121" for this suite. 
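
The release being tested above is purely label-driven: once a pod's labels stop matching the ReplicationController's selector, the controller drops its ownerReference from the pod and creates a replacement to restore the replica count. Assuming a controller whose pods carry name=pod-release as in this spec (name=released is an illustrative stand-in), the manual equivalent is:

# Pick one pod currently owned by the controller.
POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
# Change the matched label; the pod no longer satisfies the selector.
kubectl label pod "$POD" name=released --overwrite
# The pod survives as an orphan (ownerReferences now empty) while the
# controller backfills with a fresh pod.
kubectl get pod "$POD" -o jsonpath='{.metadata.ownerReferences}'
kubectl get pods -l name=pod-release
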
Aug 30 18:17:59.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 30 18:17:59.577: INFO: namespace replication-controller-121 deletion completed in 6.524449285s
• [SLOW TEST:11.729 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 30 18:17:59.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7662
[It] Should recreate evicted statefulset [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7662
STEP: Creating statefulset with conflicting port in namespace statefulset-7662
STEP: Waiting until pod test-pod will start running in namespace statefulset-7662
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7662
Aug 30 18:18:06.184: INFO: Observed stateful pod in namespace: statefulset-7662, name: ss-0, uid: a2958fba-a5ec-4652-bdae-83ae98a844ad, status phase: Pending. Waiting for statefulset controller to delete.
Aug 30 18:18:06.551: INFO: Observed stateful pod in namespace: statefulset-7662, name: ss-0, uid: a2958fba-a5ec-4652-bdae-83ae98a844ad, status phase: Failed. Waiting for statefulset controller to delete.
Aug 30 18:18:06.606: INFO: Observed stateful pod in namespace: statefulset-7662, name: ss-0, uid: a2958fba-a5ec-4652-bdae-83ae98a844ad, status phase: Failed. Waiting for statefulset controller to delete.
Aug 30 18:18:06.613: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7662 STEP: Removing pod with conflicting port in namespace statefulset-7662 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7662 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Aug 30 18:18:12.700: INFO: Deleting all statefulset in ns statefulset-7662 Aug 30 18:18:12.706: INFO: Scaling statefulset ss to 0 Aug 30 18:18:32.734: INFO: Waiting for statefulset status.replicas updated to 0 Aug 30 18:18:32.739: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:18:32.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7662" for this suite. Aug 30 18:18:38.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:18:38.941: INFO: namespace statefulset-7662 deletion completed in 6.149236064s • [SLOW TEST:39.363 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:18:38.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Aug 30 18:18:39.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Aug 30 18:18:40.365: INFO: stderr: "" Aug 30 18:18:40.365: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:18:40.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7827" for this suite. Aug 30 18:18:46.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:18:46.640: INFO: namespace kubectl-7827 deletion completed in 6.264835834s • [SLOW TEST:7.698 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:18:46.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 30 18:18:46.768: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4256297-2272-4aa8-93ce-f9909ed3ae66" in namespace "downward-api-8839" to be "success or failure" Aug 30 18:18:46.787: INFO: Pod "downwardapi-volume-f4256297-2272-4aa8-93ce-f9909ed3ae66": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.541486ms Aug 30 18:18:49.049: INFO: Pod "downwardapi-volume-f4256297-2272-4aa8-93ce-f9909ed3ae66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280931491s Aug 30 18:18:51.056: INFO: Pod "downwardapi-volume-f4256297-2272-4aa8-93ce-f9909ed3ae66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288534569s Aug 30 18:18:53.064: INFO: Pod "downwardapi-volume-f4256297-2272-4aa8-93ce-f9909ed3ae66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.296011971s STEP: Saw pod success Aug 30 18:18:53.064: INFO: Pod "downwardapi-volume-f4256297-2272-4aa8-93ce-f9909ed3ae66" satisfied condition "success or failure" Aug 30 18:18:53.074: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f4256297-2272-4aa8-93ce-f9909ed3ae66 container client-container: STEP: delete the pod Aug 30 18:18:53.116: INFO: Waiting for pod downwardapi-volume-f4256297-2272-4aa8-93ce-f9909ed3ae66 to disappear Aug 30 18:18:53.127: INFO: Pod downwardapi-volume-f4256297-2272-4aa8-93ce-f9909ed3ae66 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:18:53.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8839" for this suite. Aug 30 18:18:59.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:18:59.319: INFO: namespace downward-api-8839 deletion completed in 6.183026096s • [SLOW TEST:12.677 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:18:59.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Aug 30 18:19:03.634: INFO: Pod pod-hostip-0d5719d6-e429-418f-b7cb-94914ba9c3a5 has hostIP: 172.18.0.5 [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:19:03.635: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "pods-6602" for this suite. Aug 30 18:19:25.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:19:25.799: INFO: namespace pods-6602 deletion completed in 22.153297233s • [SLOW TEST:26.476 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:19:25.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 30 18:19:26.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-1943' Aug 30 18:19:30.960: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 30 18:19:30.960: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Aug 30 18:19:32.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1943' Aug 30 18:19:34.240: INFO: stderr: "" Aug 30 18:19:34.240: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:19:34.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1943" for this suite. Aug 30 18:21:36.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:21:36.510: INFO: namespace kubectl-1943 deletion completed in 2m2.205622777s • [SLOW TEST:130.708 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:21:36.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Aug 30 18:21:36.605: INFO: Waiting up to 5m0s for pod "downward-api-d675f964-3005-4197-ac7a-587431679690" in namespace "downward-api-590" to be "success or failure" Aug 30 18:21:36.614: INFO: Pod "downward-api-d675f964-3005-4197-ac7a-587431679690": Phase="Pending", Reason="", readiness=false. Elapsed: 8.825036ms Aug 30 18:21:38.621: INFO: Pod "downward-api-d675f964-3005-4197-ac7a-587431679690": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.016552114s Aug 30 18:21:40.627: INFO: Pod "downward-api-d675f964-3005-4197-ac7a-587431679690": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022001765s Aug 30 18:21:42.633: INFO: Pod "downward-api-d675f964-3005-4197-ac7a-587431679690": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028655325s STEP: Saw pod success Aug 30 18:21:42.634: INFO: Pod "downward-api-d675f964-3005-4197-ac7a-587431679690" satisfied condition "success or failure" Aug 30 18:21:42.638: INFO: Trying to get logs from node iruya-worker2 pod downward-api-d675f964-3005-4197-ac7a-587431679690 container dapi-container: STEP: delete the pod Aug 30 18:21:42.674: INFO: Waiting for pod downward-api-d675f964-3005-4197-ac7a-587431679690 to disappear Aug 30 18:21:42.685: INFO: Pod downward-api-d675f964-3005-4197-ac7a-587431679690 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:21:42.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-590" for this suite. Aug 30 18:21:48.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:21:48.853: INFO: namespace downward-api-590 deletion completed in 6.15437595s • [SLOW TEST:12.341 seconds] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:21:48.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 30 18:21:48.949: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4bde6e3e-3580-4e7d-bda8-2768e44ab82d" in namespace "downward-api-2036" to be "success or failure" Aug 30 18:21:48.968: INFO: Pod "downwardapi-volume-4bde6e3e-3580-4e7d-bda8-2768e44ab82d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.213487ms Aug 30 18:21:51.020: INFO: Pod "downwardapi-volume-4bde6e3e-3580-4e7d-bda8-2768e44ab82d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.070051913s Aug 30 18:21:53.027: INFO: Pod "downwardapi-volume-4bde6e3e-3580-4e7d-bda8-2768e44ab82d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077099581s STEP: Saw pod success Aug 30 18:21:53.027: INFO: Pod "downwardapi-volume-4bde6e3e-3580-4e7d-bda8-2768e44ab82d" satisfied condition "success or failure" Aug 30 18:21:53.032: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4bde6e3e-3580-4e7d-bda8-2768e44ab82d container client-container: STEP: delete the pod Aug 30 18:21:53.086: INFO: Waiting for pod downwardapi-volume-4bde6e3e-3580-4e7d-bda8-2768e44ab82d to disappear Aug 30 18:21:53.100: INFO: Pod downwardapi-volume-4bde6e3e-3580-4e7d-bda8-2768e44ab82d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:21:53.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2036" for this suite. Aug 30 18:21:59.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:21:59.278: INFO: namespace downward-api-2036 deletion completed in 6.170419103s • [SLOW TEST:10.424 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:21:59.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:22:03.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5620" for this suite. 
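
Judging by the cleanup STEPs above, the wrapper test mounts two wrapped volume types, a secret and a configmap, in one pod and passes as long as the pod comes up, i.e. the emptyDir wrappers backing the two mounts do not collide. A hand-rolled sketch with illustrative names (wrapper-secret, wrapper-configmap, wrapper-pod):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: wrapper-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wrapper-configmap
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-pod
spec:
  containers:
  - name: wrapper-test
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
    - name: configmap-vol
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-vol
    secret:
      secretName: wrapper-secret
  - name: configmap-vol
    configMap:
      name: wrapper-configmap
EOF
# Success criterion is simply that the pod reaches Ready with both mounts.
kubectl wait --for=condition=Ready pod/wrapper-pod --timeout=120s
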
Aug 30 18:22:09.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:22:09.664: INFO: namespace emptydir-wrapper-5620 deletion completed in 6.169322691s • [SLOW TEST:10.382 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:22:09.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 30 18:22:14.123: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:22:14.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1477" for this suite. 
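Note on the spec above: with TerminationMessagePolicy set to FallbackToLogsOnError, the kubelet consults container logs only when the container fails, so a container that exits 0 without writing its termination-log path reports an empty termination message, which is what the "Expected: &{} to match Container's Termination Message: --" line asserts. A hedged Go sketch of such a pod; name, image, and command are illustrative assumptions:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminationMessagePod exits successfully without writing the termination
// log, so with FallbackToLogsOnError the reported termination message stays
// empty (logs are only used on error). Names and image are illustrative.
func terminationMessagePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo hello; exit 0"},
				// Fall back to container logs only when the container fails.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
				TerminationMessagePath:   "/dev/termination-log",
			}},
		},
	}
}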
Aug 30 18:22:20.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:22:20.300: INFO: namespace container-runtime-1477 deletion completed in 6.1463015s • [SLOW TEST:10.635 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:22:20.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 30 18:22:24.602: INFO: Waiting up to 5m0s for pod "client-envvars-5f3a8ecd-2f10-4bd1-9e3c-a12a26c89fae" in namespace "pods-4721" to be "success or failure" Aug 30 18:22:24.653: INFO: Pod "client-envvars-5f3a8ecd-2f10-4bd1-9e3c-a12a26c89fae": Phase="Pending", Reason="", readiness=false. Elapsed: 51.141994ms Aug 30 18:22:26.660: INFO: Pod "client-envvars-5f3a8ecd-2f10-4bd1-9e3c-a12a26c89fae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057870309s Aug 30 18:22:28.667: INFO: Pod "client-envvars-5f3a8ecd-2f10-4bd1-9e3c-a12a26c89fae": Phase="Running", Reason="", readiness=true. Elapsed: 4.064885654s Aug 30 18:22:30.675: INFO: Pod "client-envvars-5f3a8ecd-2f10-4bd1-9e3c-a12a26c89fae": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.073064942s STEP: Saw pod success Aug 30 18:22:30.675: INFO: Pod "client-envvars-5f3a8ecd-2f10-4bd1-9e3c-a12a26c89fae" satisfied condition "success or failure" Aug 30 18:22:30.682: INFO: Trying to get logs from node iruya-worker pod client-envvars-5f3a8ecd-2f10-4bd1-9e3c-a12a26c89fae container env3cont: STEP: delete the pod Aug 30 18:22:30.725: INFO: Waiting for pod client-envvars-5f3a8ecd-2f10-4bd1-9e3c-a12a26c89fae to disappear Aug 30 18:22:30.743: INFO: Pod client-envvars-5f3a8ecd-2f10-4bd1-9e3c-a12a26c89fae no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:22:30.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4721" for this suite. Aug 30 18:23:16.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:23:16.935: INFO: namespace pods-4721 deletion completed in 46.182149928s • [SLOW TEST:56.634 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:23:16.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-13c4549c-3daf-4af4-8b8b-a79758fe3aa5 STEP: Creating configMap with name cm-test-opt-upd-4c884d30-21fa-430b-9100-84b36c80ea4d STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-13c4549c-3daf-4af4-8b8b-a79758fe3aa5 STEP: Updating configmap cm-test-opt-upd-4c884d30-21fa-430b-9100-84b36c80ea4d STEP: Creating configMap with name cm-test-opt-create-f9c83fa4-bc00-43c9-8888-4ef6883340fa STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:24:35.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8556" for this suite. 
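Note on the spec above: the "optional updates" flow creates a pod whose projected volume references several configMaps marked Optional, then deletes one, updates another, and creates a third, waiting for the kubelet to reflect each change in the mounted files. A minimal Go sketch of that volume; the configMap names are truncated illustrations of the run's names, which carry UUID suffixes:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// optionalProjectedConfigMaps builds a projected volume whose configMap
// sources are all Optional, so the kubelet tolerates one being deleted and
// picks up creations and updates while the pod runs. Names are illustrative.
func optionalProjectedConfigMaps() corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "projected-configmaps",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						Optional:             &optional,
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
}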
Aug 30 18:24:59.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:24:59.950: INFO: namespace projected-8556 deletion completed in 24.281479957s • [SLOW TEST:103.010 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:24:59.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-c4e60f9c-f6eb-45ec-b4cc-95114b2e3a77 STEP: Creating a pod to test consume secrets Aug 30 18:25:00.134: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-56939fd5-18e3-486a-a9ae-f69724c0c5ea" in namespace "projected-6708" to be "success or failure" Aug 30 18:25:00.201: INFO: Pod "pod-projected-secrets-56939fd5-18e3-486a-a9ae-f69724c0c5ea": Phase="Pending", Reason="", readiness=false. Elapsed: 67.066203ms Aug 30 18:25:02.289: INFO: Pod "pod-projected-secrets-56939fd5-18e3-486a-a9ae-f69724c0c5ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154668289s Aug 30 18:25:04.296: INFO: Pod "pod-projected-secrets-56939fd5-18e3-486a-a9ae-f69724c0c5ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.161652348s STEP: Saw pod success Aug 30 18:25:04.296: INFO: Pod "pod-projected-secrets-56939fd5-18e3-486a-a9ae-f69724c0c5ea" satisfied condition "success or failure" Aug 30 18:25:04.486: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-56939fd5-18e3-486a-a9ae-f69724c0c5ea container projected-secret-volume-test: STEP: delete the pod Aug 30 18:25:04.533: INFO: Waiting for pod pod-projected-secrets-56939fd5-18e3-486a-a9ae-f69724c0c5ea to disappear Aug 30 18:25:04.561: INFO: Pod pod-projected-secrets-56939fd5-18e3-486a-a9ae-f69724c0c5ea no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:25:04.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6708" for this suite. 
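Note on the spec above: "mappings and Item Mode set" projects a secret key to a remapped path with an explicit file mode, and the test container reads the file back and checks its mode bits. A minimal Go sketch of the volume; key, path, mode, and secret name are illustrative assumptions:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// mappedProjectedSecret remaps a secret key to a new path and sets an
// explicit per-item file mode, the shape behind the "mappings and Item Mode
// set" spec. Key, path, mode, and the secret name are illustrative.
func mappedProjectedSecret() corev1.Volume {
	mode := int32(0400) // expected file mode for the projected item
	return corev1.Volume{
		Name: "projected-secret",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
}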
Aug 30 18:25:10.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:25:10.793: INFO: namespace projected-6708 deletion completed in 6.2244691s • [SLOW TEST:10.841 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:25:10.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 30 18:25:10.914: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 30 18:25:15.921: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 30 18:25:15.921: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Aug 30 18:25:22.145: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-4382,SelfLink:/apis/apps/v1/namespaces/deployment-4382/deployments/test-cleanup-deployment,UID:d670b182-47c6-42dc-bccb-b05cd04319ef,ResourceVersion:4080468,Generation:1,CreationTimestamp:2020-08-30 18:25:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-30 18:25:16 +0000 UTC 2020-08-30 18:25:16 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-30 18:25:20 +0000 UTC 2020-08-30 18:25:16 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Aug 30 18:25:22.154: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-4382,SelfLink:/apis/apps/v1/namespaces/deployment-4382/replicasets/test-cleanup-deployment-55bbcbc84c,UID:2d3830aa-c299-473e-b521-02d26cae3847,ResourceVersion:4080456,Generation:1,CreationTimestamp:2020-08-30 18:25:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment d670b182-47c6-42dc-bccb-b05cd04319ef 0x4002fbd2f7 0x4002fbd2f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash:
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 30 18:25:22.162: INFO: Pod "test-cleanup-deployment-55bbcbc84c-f4kn4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-f4kn4,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-4382,SelfLink:/api/v1/namespaces/deployment-4382/pods/test-cleanup-deployment-55bbcbc84c-f4kn4,UID:8f5e95ac-30b6-47d0-a4b8-aa7a8869de7a,ResourceVersion:4080455,Generation:0,CreationTimestamp:2020-08-30 18:25:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 2d3830aa-c299-473e-b521-02d26cae3847 0x400203dca7 0x400203dca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jp4fp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jp4fp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-jp4fp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400203dd20} {node.kubernetes.io/unreachable Exists NoExecute 0x400203dd40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 18:25:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 18:25:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 18:25:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-30 18:25:16 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.21,StartTime:2020-08-30 18:25:16 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-30 18:25:19 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://dffb064eadd423c62a3b76113c2d2a2a06596678ada8fdbda8bba56b9f118548}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:25:22.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4382" for this suite. 
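Note on the spec above: the cleanup behavior hinges on the RevisionHistoryLimit:*0 visible in the deployment dump; with a history limit of 0, the deployment controller garbage-collects superseded ReplicaSets as soon as the rollout completes, which is what "Waiting for deployment test-cleanup-deployment history to be cleaned up" polls for. A Go sketch of a deployment configured this way, reusing the name, labels, and image from the dump; the rest is a minimal illustrative assumption:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cleanupDeployment sets RevisionHistoryLimit to 0, so old ReplicaSets are
// deleted as soon as a rollout finishes. Name, labels, and image match the
// dump above; everything else is a minimal illustrative filling-in.
func cleanupDeployment() *appsv1.Deployment {
	replicas := int32(1)
	historyLimit := int32(0)
	labels := map[string]string{"name": "cleanup-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &historyLimit,
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
}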
Aug 30 18:25:28.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:25:28.442: INFO: namespace deployment-4382 deletion completed in 6.270934929s • [SLOW TEST:17.647 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:25:28.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-lscz STEP: Creating a pod to test atomic-volume-subpath Aug 30 18:25:28.890: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lscz" in namespace "subpath-672" to be "success or failure" Aug 30 18:25:28.930: INFO: Pod "pod-subpath-test-secret-lscz": Phase="Pending", Reason="", readiness=false. Elapsed: 39.838992ms Aug 30 18:25:31.023: INFO: Pod "pod-subpath-test-secret-lscz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133075887s Aug 30 18:25:33.030: INFO: Pod "pod-subpath-test-secret-lscz": Phase="Running", Reason="", readiness=true. Elapsed: 4.139982583s Aug 30 18:25:35.036: INFO: Pod "pod-subpath-test-secret-lscz": Phase="Running", Reason="", readiness=true. Elapsed: 6.146192458s Aug 30 18:25:37.078: INFO: Pod "pod-subpath-test-secret-lscz": Phase="Running", Reason="", readiness=true. Elapsed: 8.188290237s Aug 30 18:25:39.086: INFO: Pod "pod-subpath-test-secret-lscz": Phase="Running", Reason="", readiness=true. Elapsed: 10.195459771s Aug 30 18:25:41.092: INFO: Pod "pod-subpath-test-secret-lscz": Phase="Running", Reason="", readiness=true. Elapsed: 12.20219086s Aug 30 18:25:43.099: INFO: Pod "pod-subpath-test-secret-lscz": Phase="Running", Reason="", readiness=true. Elapsed: 14.209196803s Aug 30 18:25:45.173: INFO: Pod "pod-subpath-test-secret-lscz": Phase="Running", Reason="", readiness=true. Elapsed: 16.282980006s Aug 30 18:25:47.179: INFO: Pod "pod-subpath-test-secret-lscz": Phase="Running", Reason="", readiness=true. Elapsed: 18.289159382s Aug 30 18:25:49.186: INFO: Pod "pod-subpath-test-secret-lscz": Phase="Running", Reason="", readiness=true. Elapsed: 20.295671241s Aug 30 18:25:51.193: INFO: Pod "pod-subpath-test-secret-lscz": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.302623193s Aug 30 18:25:53.200: INFO: Pod "pod-subpath-test-secret-lscz": Phase="Running", Reason="", readiness=true. Elapsed: 24.309467839s Aug 30 18:25:55.207: INFO: Pod "pod-subpath-test-secret-lscz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.316473403s STEP: Saw pod success Aug 30 18:25:55.207: INFO: Pod "pod-subpath-test-secret-lscz" satisfied condition "success or failure" Aug 30 18:25:55.210: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-lscz container test-container-subpath-secret-lscz: STEP: delete the pod Aug 30 18:25:55.247: INFO: Waiting for pod pod-subpath-test-secret-lscz to disappear Aug 30 18:25:55.281: INFO: Pod pod-subpath-test-secret-lscz no longer exists STEP: Deleting pod pod-subpath-test-secret-lscz Aug 30 18:25:55.281: INFO: Deleting pod "pod-subpath-test-secret-lscz" in namespace "subpath-672" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:25:55.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-672" for this suite. Aug 30 18:26:01.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:26:01.481: INFO: namespace subpath-672 deletion completed in 6.191374015s • [SLOW TEST:33.039 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:26:01.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-8d980e9a-ea10-45c1-852d-79ee4f6c387d STEP: Creating a pod to test consume configMaps Aug 30 18:26:01.571: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-023dcde0-4aff-43af-b25b-175f07fed139" in namespace "projected-2241" to be "success or failure" Aug 30 18:26:01.623: INFO: Pod "pod-projected-configmaps-023dcde0-4aff-43af-b25b-175f07fed139": Phase="Pending", Reason="", readiness=false. 
Elapsed: 52.352333ms Aug 30 18:26:03.816: INFO: Pod "pod-projected-configmaps-023dcde0-4aff-43af-b25b-175f07fed139": Phase="Pending", Reason="", readiness=false. Elapsed: 2.245071772s Aug 30 18:26:05.824: INFO: Pod "pod-projected-configmaps-023dcde0-4aff-43af-b25b-175f07fed139": Phase="Pending", Reason="", readiness=false. Elapsed: 4.252524471s Aug 30 18:26:07.831: INFO: Pod "pod-projected-configmaps-023dcde0-4aff-43af-b25b-175f07fed139": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.259745019s STEP: Saw pod success Aug 30 18:26:07.831: INFO: Pod "pod-projected-configmaps-023dcde0-4aff-43af-b25b-175f07fed139" satisfied condition "success or failure" Aug 30 18:26:07.836: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-023dcde0-4aff-43af-b25b-175f07fed139 container projected-configmap-volume-test: STEP: delete the pod Aug 30 18:26:07.895: INFO: Waiting for pod pod-projected-configmaps-023dcde0-4aff-43af-b25b-175f07fed139 to disappear Aug 30 18:26:07.903: INFO: Pod pod-projected-configmaps-023dcde0-4aff-43af-b25b-175f07fed139 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:26:07.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2241" for this suite. Aug 30 18:26:13.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:26:14.072: INFO: namespace projected-2241 deletion completed in 6.161123469s • [SLOW TEST:12.590 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 30 18:26:14.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-d7b0a506-9ed5-4352-84e9-7508ed35c632 STEP: Creating a pod to test consume secrets Aug 30 18:26:14.187: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-132a261f-b6e3-4b96-b739-eb114debd440" in namespace "projected-5691" to be "success or failure" Aug 30 18:26:14.196: INFO: Pod 
"pod-projected-secrets-132a261f-b6e3-4b96-b739-eb114debd440": Phase="Pending", Reason="", readiness=false. Elapsed: 8.523968ms Aug 30 18:26:16.203: INFO: Pod "pod-projected-secrets-132a261f-b6e3-4b96-b739-eb114debd440": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015652935s Aug 30 18:26:18.211: INFO: Pod "pod-projected-secrets-132a261f-b6e3-4b96-b739-eb114debd440": Phase="Running", Reason="", readiness=true. Elapsed: 4.023433178s Aug 30 18:26:20.218: INFO: Pod "pod-projected-secrets-132a261f-b6e3-4b96-b739-eb114debd440": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029909787s STEP: Saw pod success Aug 30 18:26:20.218: INFO: Pod "pod-projected-secrets-132a261f-b6e3-4b96-b739-eb114debd440" satisfied condition "success or failure" Aug 30 18:26:20.222: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-132a261f-b6e3-4b96-b739-eb114debd440 container projected-secret-volume-test: STEP: delete the pod Aug 30 18:26:20.278: INFO: Waiting for pod pod-projected-secrets-132a261f-b6e3-4b96-b739-eb114debd440 to disappear Aug 30 18:26:20.311: INFO: Pod pod-projected-secrets-132a261f-b6e3-4b96-b739-eb114debd440 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 30 18:26:20.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5691" for this suite. Aug 30 18:26:26.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 30 18:26:26.494: INFO: namespace projected-5691 deletion completed in 6.172859106s • [SLOW TEST:12.419 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSAug 30 18:26:26.500: INFO: Running AfterSuite actions on all nodes Aug 30 18:26:26.501: INFO: Running AfterSuite actions on node 1 Aug 30 18:26:26.502: INFO: Skipping dumping logs from cluster Ran 215 of 4413 Specs in 7085.866 seconds SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped PASS