I0818 23:52:36.363380 7 e2e.go:243] Starting e2e run "ca6273c1-46a0-431c-a61b-38060cf317b2" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597794745 - Will randomize all specs
Will run 215 of 4413 specs

Aug 18 23:52:37.716: INFO: >>> kubeConfig: /root/.kube/config
Aug 18 23:52:37.777: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 18 23:52:37.951: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 18 23:52:38.112: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 18 23:52:38.112: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 18 23:52:38.112: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 18 23:52:38.151: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 18 23:52:38.151: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 18 23:52:38.151: INFO: e2e test version: v1.15.12
Aug 18 23:52:38.155: INFO: kube-apiserver version: v1.15.12
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment
  should create a deployment from an image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:52:38.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Aug 18 23:52:38.238: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 18 23:52:38.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-3659'
Aug 18 23:52:42.195: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 18 23:52:42.196: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Aug 18 23:52:46.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3659'
Aug 18 23:52:47.620: INFO: stderr: ""
Aug 18 23:52:47.620: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:52:47.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3659" for this suite.
Aug 18 23:53:09.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:53:09.839: INFO: namespace kubectl-3659 deletion completed in 22.195236068s

• [SLOW TEST:31.679 seconds]
[sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a deployment from an image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:53:09.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 18 23:53:09.965: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f68e32e6-b738-436f-936a-63e29b32c298" in namespace "projected-3679" to be "success or failure"
Aug 18 23:53:09.996: INFO: Pod "downwardapi-volume-f68e32e6-b738-436f-936a-63e29b32c298": Phase="Pending", Reason="", readiness=false. Elapsed: 30.032354ms
Aug 18 23:53:12.005: INFO: Pod "downwardapi-volume-f68e32e6-b738-436f-936a-63e29b32c298": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039559055s
Aug 18 23:53:14.013: INFO: Pod "downwardapi-volume-f68e32e6-b738-436f-936a-63e29b32c298": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048002256s
STEP: Saw pod success
Aug 18 23:53:14.014: INFO: Pod "downwardapi-volume-f68e32e6-b738-436f-936a-63e29b32c298" satisfied condition "success or failure"
Aug 18 23:53:14.024: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f68e32e6-b738-436f-936a-63e29b32c298 container client-container:
STEP: delete the pod
Aug 18 23:53:14.067: INFO: Waiting for pod downwardapi-volume-f68e32e6-b738-436f-936a-63e29b32c298 to disappear
Aug 18 23:53:14.072: INFO: Pod downwardapi-volume-f68e32e6-b738-436f-936a-63e29b32c298 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:53:14.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3679" for this suite.
Aug 18 23:53:20.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:53:20.318: INFO: namespace projected-3679 deletion completed in 6.237797062s

• [SLOW TEST:10.474 seconds]
[sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:53:20.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-1d6dbfc4-eefd-4cb4-b8ea-5400fc440a35
STEP: Creating a pod to test consume secrets
Aug 18 23:53:20.499: INFO: Waiting up to 5m0s for pod "pod-secrets-3dd3025d-bbe5-4af4-a8a7-e80bd70d03ca" in namespace "secrets-9540" to be "success or failure"
Aug 18 23:53:20.504: INFO: Pod "pod-secrets-3dd3025d-bbe5-4af4-a8a7-e80bd70d03ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.527643ms
Aug 18 23:53:22.512: INFO: Pod "pod-secrets-3dd3025d-bbe5-4af4-a8a7-e80bd70d03ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01257841s
Aug 18 23:53:24.520: INFO: Pod "pod-secrets-3dd3025d-bbe5-4af4-a8a7-e80bd70d03ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02035856s
STEP: Saw pod success
Aug 18 23:53:24.520: INFO: Pod "pod-secrets-3dd3025d-bbe5-4af4-a8a7-e80bd70d03ca" satisfied condition "success or failure"
Aug 18 23:53:24.525: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-3dd3025d-bbe5-4af4-a8a7-e80bd70d03ca container secret-volume-test:
STEP: delete the pod
Aug 18 23:53:24.577: INFO: Waiting for pod pod-secrets-3dd3025d-bbe5-4af4-a8a7-e80bd70d03ca to disappear
Aug 18 23:53:24.581: INFO: Pod pod-secrets-3dd3025d-bbe5-4af4-a8a7-e80bd70d03ca no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:53:24.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9540" for this suite.
Aug 18 23:53:30.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:53:30.780: INFO: namespace secrets-9540 deletion completed in 6.189896412s

• [SLOW TEST:10.460 seconds]
[sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:53:30.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 18 23:53:31.169: INFO: Waiting up to 5m0s for pod "pod-f916b7b2-2608-4ced-ac30-bde403850f1f" in namespace "emptydir-1861" to be "success or failure"
Aug 18 23:53:31.305: INFO: Pod "pod-f916b7b2-2608-4ced-ac30-bde403850f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 135.926784ms
Aug 18 23:53:33.312: INFO: Pod "pod-f916b7b2-2608-4ced-ac30-bde403850f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142684369s
Aug 18 23:53:35.319: INFO: Pod "pod-f916b7b2-2608-4ced-ac30-bde403850f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149663593s
Aug 18 23:53:37.338: INFO: Pod "pod-f916b7b2-2608-4ced-ac30-bde403850f1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.168409619s
STEP: Saw pod success
Aug 18 23:53:37.338: INFO: Pod "pod-f916b7b2-2608-4ced-ac30-bde403850f1f" satisfied condition "success or failure"
Aug 18 23:53:37.495: INFO: Trying to get logs from node iruya-worker pod pod-f916b7b2-2608-4ced-ac30-bde403850f1f container test-container:
STEP: delete the pod
Aug 18 23:53:37.678: INFO: Waiting for pod pod-f916b7b2-2608-4ced-ac30-bde403850f1f to disappear
Aug 18 23:53:37.688: INFO: Pod pod-f916b7b2-2608-4ced-ac30-bde403850f1f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:53:37.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1861" for this suite.
Aug 18 23:53:45.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:53:46.106: INFO: namespace emptydir-1861 deletion completed in 8.409819824s

• [SLOW TEST:15.325 seconds]
[sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:53:46.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-19fdf35f-3ef9-4935-858e-cdbc3899b9d9
STEP: Creating a pod to test consume secrets
Aug 18 23:53:46.377: INFO: Waiting up to 5m0s for pod "pod-secrets-8766a333-c243-4c90-895e-d7de743361df" in namespace "secrets-9821" to be "success or failure"
Aug 18 23:53:46.495: INFO: Pod "pod-secrets-8766a333-c243-4c90-895e-d7de743361df": Phase="Pending", Reason="", readiness=false. Elapsed: 116.968403ms
Aug 18 23:53:48.501: INFO: Pod "pod-secrets-8766a333-c243-4c90-895e-d7de743361df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122951716s
Aug 18 23:53:50.507: INFO: Pod "pod-secrets-8766a333-c243-4c90-895e-d7de743361df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12925999s
Aug 18 23:53:52.512: INFO: Pod "pod-secrets-8766a333-c243-4c90-895e-d7de743361df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.134259698s
STEP: Saw pod success
Aug 18 23:53:52.512: INFO: Pod "pod-secrets-8766a333-c243-4c90-895e-d7de743361df" satisfied condition "success or failure"
Aug 18 23:53:52.516: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-8766a333-c243-4c90-895e-d7de743361df container secret-volume-test:
STEP: delete the pod
Aug 18 23:53:52.828: INFO: Waiting for pod pod-secrets-8766a333-c243-4c90-895e-d7de743361df to disappear
Aug 18 23:53:52.876: INFO: Pod pod-secrets-8766a333-c243-4c90-895e-d7de743361df no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:53:52.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9821" for this suite.
Aug 18 23:54:00.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:54:01.056: INFO: namespace secrets-9821 deletion completed in 8.167463841s
STEP: Destroying namespace "secret-namespace-3098" for this suite.
Aug 18 23:54:07.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:54:07.233: INFO: namespace secret-namespace-3098 deletion completed in 6.177210306s

• [SLOW TEST:21.126 seconds]
[sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:54:07.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-26aa4064-c497-4052-9895-e1bfebf39a85
STEP: Creating a pod to test consume configMaps
Aug 18 23:54:07.349: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dfaec6c7-165f-4696-a0ea-d6c109ecb770" in namespace "projected-8130" to be "success or failure"
Aug 18 23:54:07.365: INFO: Pod "pod-projected-configmaps-dfaec6c7-165f-4696-a0ea-d6c109ecb770": Phase="Pending", Reason="", readiness=false. Elapsed: 15.051907ms
Aug 18 23:54:09.371: INFO: Pod "pod-projected-configmaps-dfaec6c7-165f-4696-a0ea-d6c109ecb770": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021759366s
Aug 18 23:54:11.378: INFO: Pod "pod-projected-configmaps-dfaec6c7-165f-4696-a0ea-d6c109ecb770": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028388811s
STEP: Saw pod success
Aug 18 23:54:11.378: INFO: Pod "pod-projected-configmaps-dfaec6c7-165f-4696-a0ea-d6c109ecb770" satisfied condition "success or failure"
Aug 18 23:54:11.383: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-dfaec6c7-165f-4696-a0ea-d6c109ecb770 container projected-configmap-volume-test:
STEP: delete the pod
Aug 18 23:54:11.440: INFO: Waiting for pod pod-projected-configmaps-dfaec6c7-165f-4696-a0ea-d6c109ecb770 to disappear
Aug 18 23:54:11.451: INFO: Pod pod-projected-configmaps-dfaec6c7-165f-4696-a0ea-d6c109ecb770 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:54:11.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8130" for this suite.
Aug 18 23:54:19.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:54:19.600: INFO: namespace projected-8130 deletion completed in 8.139808975s

• [SLOW TEST:12.364 seconds]
[sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:54:19.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 18 23:54:23.763: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:54:23.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2132" for this suite.
Aug 18 23:54:29.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:54:30.005: INFO: namespace container-runtime-2132 deletion completed in 6.194437379s

• [SLOW TEST:10.403 seconds]
[k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy through a service and a pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:54:30.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-snhpn in namespace proxy-4762
I0818 23:54:30.174226 7 runners.go:180] Created replication controller with name: proxy-service-snhpn, namespace: proxy-4762, replica count: 1
I0818 23:54:31.229147 7 runners.go:180] proxy-service-snhpn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0818 23:54:32.230742 7 runners.go:180] proxy-service-snhpn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0818 23:54:33.231923 7 runners.go:180] proxy-service-snhpn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0818 23:54:34.232653 7 runners.go:180] proxy-service-snhpn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0818 23:54:35.233443 7 runners.go:180] proxy-service-snhpn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0818 23:54:36.234748 7 runners.go:180] proxy-service-snhpn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0818 23:54:37.235492 7 runners.go:180] proxy-service-snhpn Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Aug 18 23:54:37.248: INFO: setup took 7.178523605s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 18 23:54:37.258: INFO: (0) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname2/proxy/: bar (200; 9.109467ms)
Aug 18 23:54:37.259: INFO: (0) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname1/proxy/: foo (200; 10.046569ms)
Aug 18 23:54:37.259: INFO: (0) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 10.169248ms)
Aug 18 23:54:37.259: INFO: (0) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: foo (200; 10.218478ms)
Aug 18 23:54:37.260: INFO: (0) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:162/proxy/: bar (200; 10.828504ms)
Aug 18 23:54:37.261: INFO: (0) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:162/proxy/: bar (200; 11.861922ms)
Aug 18 23:54:37.261: INFO: (0) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:1080/proxy/: ... (200; 12.223253ms)
Aug 18 23:54:37.261: INFO: (0) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname2/proxy/: bar (200; 12.426085ms)
Aug 18 23:54:37.262: INFO: (0) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:1080/proxy/: test<...
(200; 12.555575ms)
Aug 18 23:54:37.262: INFO: (0) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname1/proxy/: foo (200; 12.550916ms)
Aug 18 23:54:37.262: INFO: (0) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 13.002069ms)
Aug 18 23:54:37.265: INFO: (0) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/: tls baz (200; 15.331419ms)
Aug 18 23:54:37.267: INFO: (0) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 17.364248ms)
Aug 18 23:54:37.267: INFO: (0) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname2/proxy/: tls qux (200; 17.844735ms)
Aug 18 23:54:37.270: INFO: (0) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: ... (200; 7.017999ms)
Aug 18 23:54:37.277: INFO: (1) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 7.140871ms)
Aug 18 23:54:37.278: INFO: (1) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/: tls baz (200; 7.131282ms)
Aug 18 23:54:37.278: INFO: (1) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 7.481115ms)
Aug 18 23:54:37.278: INFO: (1) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:462/proxy/: tls qux (200; 7.863074ms)
Aug 18 23:54:37.278: INFO: (1) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname1/proxy/: foo (200; 8.055235ms)
Aug 18 23:54:37.278: INFO: (1) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:162/proxy/: bar (200; 7.973659ms)
Aug 18 23:54:37.278: INFO: (1) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 8.186416ms)
Aug 18 23:54:37.278: INFO: (1) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:1080/proxy/: test<... (200; 8.044109ms)
Aug 18 23:54:37.279: INFO: (1) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname2/proxy/: tls qux (200; 8.518409ms)
Aug 18 23:54:37.279: INFO: (1) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname1/proxy/: foo (200; 8.696306ms)
Aug 18 23:54:37.279: INFO: (1) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname2/proxy/: bar (200; 8.622704ms)
Aug 18 23:54:37.283: INFO: (2) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 3.515594ms)
Aug 18 23:54:37.284: INFO: (2) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: foo (200; 4.56476ms)
Aug 18 23:54:37.284: INFO: (2) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:162/proxy/: bar (200; 5.025224ms)
Aug 18 23:54:37.286: INFO: (2) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:1080/proxy/: ... (200; 6.603886ms)
Aug 18 23:54:37.286: INFO: (2) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:1080/proxy/: test<... (200; 7.037446ms)
Aug 18 23:54:37.286: INFO: (2) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:162/proxy/: bar (200; 7.15917ms)
Aug 18 23:54:37.286: INFO: (2) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 7.194758ms)
Aug 18 23:54:37.287: INFO: (2) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: test<... (200; 7.410566ms)
Aug 18 23:54:37.296: INFO: (3) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: ...
(200; 8.1396ms) Aug 18 23:54:37.296: INFO: (3) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 7.891686ms) Aug 18 23:54:37.297: INFO: (3) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:462/proxy/: tls qux (200; 8.227456ms) Aug 18 23:54:37.301: INFO: (4) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname2/proxy/: bar (200; 4.769013ms) Aug 18 23:54:37.302: INFO: (4) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: foo (200; 4.584826ms) Aug 18 23:54:37.303: INFO: (4) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname1/proxy/: foo (200; 5.909114ms) Aug 18 23:54:37.303: INFO: (4) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 5.991109ms) Aug 18 23:54:37.303: INFO: (4) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname2/proxy/: bar (200; 6.231907ms) Aug 18 23:54:37.303: INFO: (4) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/: tls baz (200; 6.356189ms) Aug 18 23:54:37.303: INFO: (4) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname2/proxy/: tls qux (200; 6.510859ms) Aug 18 23:54:37.303: INFO: (4) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:1080/proxy/: ... 
(200; 6.458998ms) Aug 18 23:54:37.304: INFO: (4) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:162/proxy/: bar (200; 6.408649ms) Aug 18 23:54:37.304: INFO: (4) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:462/proxy/: tls qux (200; 6.558919ms) Aug 18 23:54:37.304: INFO: (4) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname1/proxy/: foo (200; 6.961353ms) Aug 18 23:54:37.304: INFO: (4) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 7.176158ms) Aug 18 23:54:37.304: INFO: (4) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 7.062203ms) Aug 18 23:54:37.304: INFO: (4) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:1080/proxy/: test<... (200; 7.357005ms) Aug 18 23:54:37.305: INFO: (4) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: ... (200; 4.179797ms) Aug 18 23:54:37.309: INFO: (5) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: foo (200; 4.215693ms) Aug 18 23:54:37.310: INFO: (5) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/: tls baz (200; 4.464937ms) Aug 18 23:54:37.310: INFO: (5) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 5.040793ms) Aug 18 23:54:37.311: INFO: (5) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname1/proxy/: foo (200; 5.551196ms) Aug 18 23:54:37.311: INFO: (5) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:162/proxy/: bar (200; 5.736474ms) Aug 18 23:54:37.311: INFO: (5) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname2/proxy/: bar (200; 5.865553ms) Aug 18 23:54:37.311: INFO: (5) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: test<... 
(200; 6.324147ms) Aug 18 23:54:37.312: INFO: (5) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname1/proxy/: foo (200; 6.237488ms) Aug 18 23:54:37.312: INFO: (5) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 6.523689ms) Aug 18 23:54:37.312: INFO: (5) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:462/proxy/: tls qux (200; 6.687746ms) Aug 18 23:54:37.312: INFO: (5) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname2/proxy/: tls qux (200; 6.957914ms) Aug 18 23:54:37.312: INFO: (5) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:162/proxy/: bar (200; 6.726116ms) Aug 18 23:54:37.312: INFO: (5) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 7.043664ms) Aug 18 23:54:37.312: INFO: (5) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname2/proxy/: bar (200; 7.140079ms) Aug 18 23:54:37.316: INFO: (6) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/: tls baz (200; 3.841162ms) Aug 18 23:54:37.317: INFO: (6) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:162/proxy/: bar (200; 4.157036ms) Aug 18 23:54:37.317: INFO: (6) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 4.336231ms) Aug 18 23:54:37.317: INFO: (6) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: foo (200; 4.369624ms) Aug 18 23:54:37.317: INFO: (6) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: test<... (200; 4.604749ms) Aug 18 23:54:37.317: INFO: (6) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname1/proxy/: foo (200; 5.078017ms) Aug 18 23:54:37.317: INFO: (6) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:1080/proxy/: ... 
(200; 5.02811ms) Aug 18 23:54:37.318: INFO: (6) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 5.01892ms) Aug 18 23:54:37.318: INFO: (6) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname2/proxy/: bar (200; 5.577008ms) Aug 18 23:54:37.318: INFO: (6) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:162/proxy/: bar (200; 5.791751ms) Aug 18 23:54:37.319: INFO: (6) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname2/proxy/: bar (200; 6.103585ms) Aug 18 23:54:37.319: INFO: (6) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:462/proxy/: tls qux (200; 6.317065ms) Aug 18 23:54:37.319: INFO: (6) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname2/proxy/: tls qux (200; 6.895375ms) Aug 18 23:54:37.320: INFO: (6) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 7.124654ms) Aug 18 23:54:37.320: INFO: (6) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname1/proxy/: foo (200; 7.019159ms) Aug 18 23:54:37.324: INFO: (7) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:462/proxy/: tls qux (200; 4.179858ms) Aug 18 23:54:37.327: INFO: (7) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 6.342153ms) Aug 18 23:54:37.327: INFO: (7) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/: tls baz (200; 6.824813ms) Aug 18 23:54:37.328: INFO: (7) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname1/proxy/: foo (200; 7.338766ms) Aug 18 23:54:37.328: INFO: (7) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:162/proxy/: bar (200; 7.263636ms) Aug 18 23:54:37.328: INFO: (7) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname2/proxy/: bar (200; 7.433913ms) Aug 18 23:54:37.328: INFO: (7) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: 
foo (200; 7.395055ms) Aug 18 23:54:37.328: INFO: (7) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname1/proxy/: foo (200; 7.499955ms) Aug 18 23:54:37.328: INFO: (7) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname2/proxy/: bar (200; 7.905214ms) Aug 18 23:54:37.329: INFO: (7) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:162/proxy/: bar (200; 7.872007ms) Aug 18 23:54:37.329: INFO: (7) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:1080/proxy/: test<... (200; 8.021149ms) Aug 18 23:54:37.329: INFO: (7) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: ... (200; 8.318658ms) Aug 18 23:54:37.329: INFO: (7) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 8.544191ms) Aug 18 23:54:37.329: INFO: (7) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 8.890448ms) Aug 18 23:54:37.329: INFO: (7) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname2/proxy/: tls qux (200; 8.698169ms) Aug 18 23:54:37.332: INFO: (8) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:162/proxy/: bar (200; 2.343792ms) Aug 18 23:54:37.334: INFO: (8) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:1080/proxy/: ... 
(200; 3.432967ms) Aug 18 23:54:37.334: INFO: (8) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname1/proxy/: foo (200; 4.494409ms) Aug 18 23:54:37.335: INFO: (8) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname2/proxy/: tls qux (200; 4.166465ms) Aug 18 23:54:37.335: INFO: (8) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/: tls baz (200; 3.504247ms) Aug 18 23:54:37.335: INFO: (8) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: test (200; 5.481825ms) Aug 18 23:54:37.339: INFO: (8) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname2/proxy/: bar (200; 6.487285ms) Aug 18 23:54:37.339: INFO: (8) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:1080/proxy/: test<... (200; 5.905556ms) Aug 18 23:54:37.339: INFO: (8) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: foo (200; 5.920298ms) Aug 18 23:54:37.340: INFO: (8) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 6.642744ms) Aug 18 23:54:37.342: INFO: (8) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 9.514662ms) Aug 18 23:54:37.347: INFO: (9) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: test (200; 5.145816ms) Aug 18 23:54:37.348: INFO: (9) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:1080/proxy/: test<... (200; 5.635191ms) Aug 18 23:54:37.348: INFO: (9) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:462/proxy/: tls qux (200; 5.697317ms) Aug 18 23:54:37.348: INFO: (9) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:1080/proxy/: ... 
(200; 5.674778ms) Aug 18 23:54:37.348: INFO: (9) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname1/proxy/: foo (200; 6.129806ms) Aug 18 23:54:37.348: INFO: (9) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/: tls baz (200; 5.92031ms) Aug 18 23:54:37.349: INFO: (9) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:162/proxy/: bar (200; 6.074657ms) Aug 18 23:54:37.349: INFO: (9) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:162/proxy/: bar (200; 5.987013ms) Aug 18 23:54:37.349: INFO: (9) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname2/proxy/: bar (200; 6.311716ms) Aug 18 23:54:37.349: INFO: (9) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname1/proxy/: foo (200; 6.19934ms) Aug 18 23:54:37.349: INFO: (9) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 6.331845ms) Aug 18 23:54:37.349: INFO: (9) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname2/proxy/: tls qux (200; 6.985391ms) Aug 18 23:54:37.349: INFO: (9) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname2/proxy/: bar (200; 6.903084ms) Aug 18 23:54:37.350: INFO: (9) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: foo (200; 6.506437ms) Aug 18 23:54:37.353: INFO: (10) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 2.997471ms) Aug 18 23:54:37.354: INFO: (10) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:162/proxy/: bar (200; 4.527272ms) Aug 18 23:54:37.355: INFO: (10) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:462/proxy/: tls qux (200; 5.619993ms) Aug 18 23:54:37.356: INFO: (10) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 5.865701ms) Aug 18 23:54:37.356: INFO: (10) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname1/proxy/: foo (200; 
6.031043ms) Aug 18 23:54:37.356: INFO: (10) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: foo (200; 6.144411ms) Aug 18 23:54:37.356: INFO: (10) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: test<... (200; 6.406885ms) Aug 18 23:54:37.356: INFO: (10) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname2/proxy/: bar (200; 6.762458ms) Aug 18 23:54:37.357: INFO: (10) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/: tls baz (200; 6.806711ms) Aug 18 23:54:37.357: INFO: (10) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 6.732236ms) Aug 18 23:54:37.357: INFO: (10) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:1080/proxy/: ... (200; 6.736989ms) Aug 18 23:54:37.357: INFO: (10) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:162/proxy/: bar (200; 6.835432ms) Aug 18 23:54:37.357: INFO: (10) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname1/proxy/: foo (200; 7.2307ms) Aug 18 23:54:37.357: INFO: (10) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname2/proxy/: bar (200; 7.518276ms) Aug 18 23:54:37.358: INFO: (10) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname2/proxy/: tls qux (200; 7.57388ms) Aug 18 23:54:37.362: INFO: (11) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:162/proxy/: bar (200; 3.855338ms) Aug 18 23:54:37.363: INFO: (11) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:462/proxy/: tls qux (200; 4.875646ms) Aug 18 23:54:37.363: INFO: (11) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:162/proxy/: bar (200; 4.991711ms) Aug 18 23:54:37.363: INFO: (11) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 5.069691ms) Aug 18 23:54:37.363: INFO: (11) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:1080/proxy/: 
test<... (200; 5.378165ms) Aug 18 23:54:37.363: INFO: (11) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname1/proxy/: foo (200; 5.613569ms) Aug 18 23:54:37.363: INFO: (11) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: ... (200; 5.672048ms) Aug 18 23:54:37.364: INFO: (11) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname2/proxy/: tls qux (200; 5.489172ms) Aug 18 23:54:37.365: INFO: (11) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname2/proxy/: bar (200; 6.615789ms) Aug 18 23:54:37.365: INFO: (11) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/: tls baz (200; 6.882674ms) Aug 18 23:54:37.365: INFO: (11) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: foo (200; 6.843704ms) Aug 18 23:54:37.365: INFO: (11) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 7.047722ms) Aug 18 23:54:37.365: INFO: (11) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname2/proxy/: bar (200; 6.955621ms) Aug 18 23:54:37.365: INFO: (11) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 6.978912ms) Aug 18 23:54:37.365: INFO: (11) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname1/proxy/: foo (200; 7.224701ms) Aug 18 23:54:37.369: INFO: (12) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: foo (200; 3.501381ms) Aug 18 23:54:37.371: INFO: (12) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 5.252212ms) Aug 18 23:54:37.371: INFO: (12) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/: tls baz (200; 5.242684ms) Aug 18 23:54:37.371: INFO: (12) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:162/proxy/: bar (200; 5.64831ms) Aug 18 23:54:37.371: INFO: (12) 
/api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:1080/proxy/: test<... (200; 5.82823ms) Aug 18 23:54:37.371: INFO: (12) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname1/proxy/: foo (200; 6.022743ms) Aug 18 23:54:37.371: INFO: (12) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 5.888258ms) Aug 18 23:54:37.372: INFO: (12) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname1/proxy/: foo (200; 6.261775ms) Aug 18 23:54:37.371: INFO: (12) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname2/proxy/: bar (200; 6.205668ms) Aug 18 23:54:37.372: INFO: (12) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 6.265267ms) Aug 18 23:54:37.372: INFO: (12) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:162/proxy/: bar (200; 6.362756ms) Aug 18 23:54:37.372: INFO: (12) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname2/proxy/: tls qux (200; 6.723498ms) Aug 18 23:54:37.372: INFO: (12) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname2/proxy/: bar (200; 6.689028ms) Aug 18 23:54:37.372: INFO: (12) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:1080/proxy/: ... (200; 6.589409ms) Aug 18 23:54:37.372: INFO: (12) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:462/proxy/: tls qux (200; 6.900257ms) Aug 18 23:54:37.372: INFO: (12) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: test<... (200; 3.072497ms) Aug 18 23:54:37.376: INFO: (13) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:462/proxy/: tls qux (200; 3.49379ms) Aug 18 23:54:37.377: INFO: (13) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: ... 
(200; 5.02119ms) Aug 18 23:54:37.378: INFO: (13) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname2/proxy/: bar (200; 5.380748ms) Aug 18 23:54:37.378: INFO: (13) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 5.428366ms) Aug 18 23:54:37.378: INFO: (13) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:162/proxy/: bar (200; 5.475735ms) Aug 18 23:54:37.378: INFO: (13) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/: tls baz (200; 5.566613ms) Aug 18 23:54:37.378: INFO: (13) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 5.522528ms) Aug 18 23:54:37.379: INFO: (13) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname1/proxy/: foo (200; 6.440464ms) Aug 18 23:54:37.379: INFO: (13) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: foo (200; 6.393074ms) Aug 18 23:54:37.379: INFO: (13) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname2/proxy/: tls qux (200; 6.699643ms) Aug 18 23:54:37.379: INFO: (13) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 6.671286ms) Aug 18 23:54:37.380: INFO: (13) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname2/proxy/: bar (200; 6.763014ms) Aug 18 23:54:37.380: INFO: (13) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname1/proxy/: foo (200; 7.034126ms) Aug 18 23:54:37.383: INFO: (14) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:1080/proxy/: test<... 
(200; 3.393458ms) Aug 18 23:54:37.383: INFO: (14) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 3.529385ms) Aug 18 23:54:37.384: INFO: (14) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: foo (200; 3.68849ms) Aug 18 23:54:37.386: INFO: (14) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname2/proxy/: bar (200; 6.064763ms) Aug 18 23:54:37.386: INFO: (14) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 5.896788ms) Aug 18 23:54:37.386: INFO: (14) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:162/proxy/: bar (200; 6.540336ms) Aug 18 23:54:37.387: INFO: (14) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 6.514455ms) Aug 18 23:54:37.387: INFO: (14) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:462/proxy/: tls qux (200; 6.90694ms) Aug 18 23:54:37.387: INFO: (14) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname1/proxy/: foo (200; 7.078979ms) Aug 18 23:54:37.387: INFO: (14) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:162/proxy/: bar (200; 6.984276ms) Aug 18 23:54:37.387: INFO: (14) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname1/proxy/: foo (200; 7.158556ms) Aug 18 23:54:37.387: INFO: (14) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:1080/proxy/: ... (200; 7.257814ms) Aug 18 23:54:37.387: INFO: (14) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/: tls baz (200; 7.088753ms) Aug 18 23:54:37.387: INFO: (14) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname2/proxy/: tls qux (200; 7.370839ms) Aug 18 23:54:37.388: INFO: (14) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: test<... 
(200; 3.668806ms) Aug 18 23:54:37.392: INFO: (15) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/: tls baz (200; 3.321907ms) Aug 18 23:54:37.392: INFO: (15) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: foo (200; 3.358916ms) Aug 18 23:54:37.393: INFO: (15) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname1/proxy/: foo (200; 4.023501ms) Aug 18 23:54:37.393: INFO: (15) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: ... (200; 4.193536ms) Aug 18 23:54:37.394: INFO: (15) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 4.448707ms) Aug 18 23:54:37.394: INFO: (15) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 5.229893ms) Aug 18 23:54:37.394: INFO: (15) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname2/proxy/: bar (200; 5.009385ms) Aug 18 23:54:37.394: INFO: (15) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname2/proxy/: bar (200; 4.77816ms) Aug 18 23:54:37.394: INFO: (15) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 5.04091ms) Aug 18 23:54:37.394: INFO: (15) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:162/proxy/: bar (200; 5.222919ms) Aug 18 23:54:37.395: INFO: (15) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname2/proxy/: tls qux (200; 5.975966ms) Aug 18 23:54:37.398: INFO: (16) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/: tls baz (200; 2.8395ms) Aug 18 23:54:37.398: INFO: (16) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:162/proxy/: bar (200; 2.825916ms) Aug 18 23:54:37.400: INFO: (16) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:462/proxy/: tls qux (200; 4.967812ms) Aug 18 23:54:37.401: INFO: (16) 
/api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:1080/proxy/: ... (200; 6.13637ms) Aug 18 23:54:37.401: INFO: (16) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname2/proxy/: bar (200; 6.054859ms) Aug 18 23:54:37.401: INFO: (16) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 6.067021ms) Aug 18 23:54:37.401: INFO: (16) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: test (200; 6.060232ms) Aug 18 23:54:37.401: INFO: (16) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:1080/proxy/: test<... (200; 6.336409ms) Aug 18 23:54:37.401: INFO: (16) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:162/proxy/: bar (200; 6.073092ms) Aug 18 23:54:37.401: INFO: (16) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname2/proxy/: tls qux (200; 6.089007ms) Aug 18 23:54:37.401: INFO: (16) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: foo (200; 6.456757ms) Aug 18 23:54:37.556: INFO: (16) /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname1/proxy/: foo (200; 160.876624ms) Aug 18 23:54:37.556: INFO: (16) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname1/proxy/: foo (200; 160.679379ms) Aug 18 23:54:37.620: INFO: (16) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 225.008975ms) Aug 18 23:54:37.620: INFO: (16) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname2/proxy/: bar (200; 225.025346ms) Aug 18 23:54:38.126: INFO: (17) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 504.680499ms) Aug 18 23:54:38.127: INFO: (17) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:162/proxy/: bar (200; 505.671883ms) Aug 18 23:54:38.127: INFO: (17) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 506.08016ms) Aug 18 23:54:38.127: INFO: 
(17) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:162/proxy/: bar (200; 506.173021ms) Aug 18 23:54:38.127: INFO: (17) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:1080/proxy/: ... (200; 506.661103ms) Aug 18 23:54:38.128: INFO: (17) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/: tls baz (200; 507.297861ms) Aug 18 23:54:38.128: INFO: (17) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:1080/proxy/: test<... (200; 507.470078ms) Aug 18 23:54:38.128: INFO: (17) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: foo (200; 507.602147ms) Aug 18 23:54:38.129: INFO: (17) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 507.44108ms) Aug 18 23:54:38.129: INFO: (17) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:462/proxy/: tls qux (200; 507.846912ms) Aug 18 23:54:38.129: INFO: (17) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: ... (200; 275.536595ms) Aug 18 23:54:38.406: INFO: (18) /api/v1/namespaces/proxy-4762/pods/http:proxy-service-snhpn-86ffg:160/proxy/: foo (200; 275.639374ms) Aug 18 23:54:38.407: INFO: (18) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 276.369699ms) Aug 18 23:54:38.408: INFO: (18) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 276.836453ms) Aug 18 23:54:38.408: INFO: (18) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:1080/proxy/: test<... (200; 276.738806ms) Aug 18 23:54:38.408: INFO: (18) /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:443/proxy/: ... (200; 40.401366ms) Aug 18 23:54:38.452: INFO: (19) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:1080/proxy/: test<... 
(200; 40.945357ms) Aug 18 23:54:38.452: INFO: (19) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname1/proxy/: foo (200; 41.072932ms) Aug 18 23:54:38.452: INFO: (19) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg:160/proxy/: foo (200; 40.803979ms) Aug 18 23:54:38.452: INFO: (19) /api/v1/namespaces/proxy-4762/services/https:proxy-service-snhpn:tlsportname1/proxy/: tls baz (200; 40.69368ms) Aug 18 23:54:38.452: INFO: (19) /api/v1/namespaces/proxy-4762/services/http:proxy-service-snhpn:portname2/proxy/: bar (200; 41.47947ms) Aug 18 23:54:38.452: INFO: (19) /api/v1/namespaces/proxy-4762/pods/proxy-service-snhpn-86ffg/proxy/: test (200; 41.549597ms) STEP: deleting ReplicationController proxy-service-snhpn in namespace proxy-4762, will wait for the garbage collector to delete the pods Aug 18 23:54:38.616: INFO: Deleting ReplicationController proxy-service-snhpn took: 107.161468ms Aug 18 23:54:38.918: INFO: Terminating ReplicationController proxy-service-snhpn pods took: 301.647927ms [AfterEach] version v1 /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 18 23:54:53.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4762" for this suite. 
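Every request in the proxy test above follows the same apiserver proxy URL scheme: an optional scheme prefix, the pod or service name, and an optional port or port name, joined with colons. A minimal sketch of that path construction (the `proxy_path` helper is illustrative, not part of the e2e framework):

```python
def proxy_path(namespace, kind, name, scheme=None, port=None):
    """Build an apiserver proxy path like the ones requested in the log above.

    kind is "pods" or "services"; scheme ("http"/"https") and port (a number
    or a named service port) are optional, matching entries such as
    /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/
    """
    target = name
    if scheme:
        target = f"{scheme}:{target}"
    if port is not None:
        target = f"{target}:{port}"
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/"

# Reproduce two of the paths seen in the log:
print(proxy_path("proxy-4762", "pods", "proxy-service-snhpn-86ffg", "https", 460))
# -> /api/v1/namespaces/proxy-4762/pods/https:proxy-service-snhpn-86ffg:460/proxy/
print(proxy_path("proxy-4762", "services", "proxy-service-snhpn", port="portname1"))
# -> /api/v1/namespaces/proxy-4762/services/proxy-service-snhpn:portname1/proxy/
```

The test simply issues GETs against each combination and checks for a 200 and the expected body (`foo`, `bar`, `tls baz`, `tls qux`, or the `test` page), which is why the log repeats the same path matrix for twenty iterations with varying latencies.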
Aug 18 23:54:59.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:54:59.590: INFO: namespace proxy-4762 deletion completed in 6.159515583s

• [SLOW TEST:29.583 seconds]
[sig-network] Proxy
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:54:59.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 18 23:54:59.687: INFO: Waiting up to 5m0s for pod "pod-a1b881df-520d-4f47-ab5f-b9c65ee5ff2b" in namespace "emptydir-4984" to be "success or failure"
Aug 18 23:54:59.728: INFO: Pod "pod-a1b881df-520d-4f47-ab5f-b9c65ee5ff2b": Phase="Pending", Reason="", readiness=false. Elapsed: 40.64457ms
Aug 18 23:55:01.904: INFO: Pod "pod-a1b881df-520d-4f47-ab5f-b9c65ee5ff2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216508533s
Aug 18 23:55:03.911: INFO: Pod "pod-a1b881df-520d-4f47-ab5f-b9c65ee5ff2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.224065216s
STEP: Saw pod success
Aug 18 23:55:03.912: INFO: Pod "pod-a1b881df-520d-4f47-ab5f-b9c65ee5ff2b" satisfied condition "success or failure"
Aug 18 23:55:03.917: INFO: Trying to get logs from node iruya-worker pod pod-a1b881df-520d-4f47-ab5f-b9c65ee5ff2b container test-container:
STEP: delete the pod
Aug 18 23:55:04.143: INFO: Waiting for pod pod-a1b881df-520d-4f47-ab5f-b9c65ee5ff2b to disappear
Aug 18 23:55:04.231: INFO: Pod pod-a1b881df-520d-4f47-ab5f-b9c65ee5ff2b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:55:04.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4984" for this suite.
Aug 18 23:55:10.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:55:10.491: INFO: namespace emptydir-4984 deletion completed in 6.249724041s

• [SLOW TEST:10.898 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:55:10.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 18 23:55:10.580: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ecb8b17c-910b-46e9-a96d-747ab9a2dde4" in namespace "downward-api-5187" to be "success or failure"
Aug 18 23:55:10.610: INFO: Pod "downwardapi-volume-ecb8b17c-910b-46e9-a96d-747ab9a2dde4": Phase="Pending", Reason="", readiness=false. Elapsed: 29.878162ms
Aug 18 23:55:12.618: INFO: Pod "downwardapi-volume-ecb8b17c-910b-46e9-a96d-747ab9a2dde4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037378016s
Aug 18 23:55:14.742: INFO: Pod "downwardapi-volume-ecb8b17c-910b-46e9-a96d-747ab9a2dde4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161608752s
Aug 18 23:55:16.749: INFO: Pod "downwardapi-volume-ecb8b17c-910b-46e9-a96d-747ab9a2dde4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.168847394s
STEP: Saw pod success
Aug 18 23:55:16.749: INFO: Pod "downwardapi-volume-ecb8b17c-910b-46e9-a96d-747ab9a2dde4" satisfied condition "success or failure"
Aug 18 23:55:16.754: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ecb8b17c-910b-46e9-a96d-747ab9a2dde4 container client-container:
STEP: delete the pod
Aug 18 23:55:16.815: INFO: Waiting for pod downwardapi-volume-ecb8b17c-910b-46e9-a96d-747ab9a2dde4 to disappear
Aug 18 23:55:16.819: INFO: Pod downwardapi-volume-ecb8b17c-910b-46e9-a96d-747ab9a2dde4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:55:16.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5187" for this suite.
Aug 18 23:55:22.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:55:22.999: INFO: namespace downward-api-5187 deletion completed in 6.169738574s
• [SLOW TEST:12.507 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:55:23.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 18 23:55:33.151: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:55:33.172: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 18 23:55:35.172: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:55:35.180: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 18 23:55:37.172: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:55:37.180: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 18 23:55:39.172: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:55:39.179: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 18 23:55:41.172: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:55:41.180: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 18 23:55:43.172: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:55:43.178: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 18 23:55:45.172: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:55:45.179: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 18 23:55:47.172: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:55:47.258: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 18 23:55:49.172: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:55:49.178: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 18 23:55:51.172: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:55:51.179: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 18 23:55:53.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:55:53.179: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 18 23:55:55.172: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:55:55.180: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 18 23:55:57.172: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:55:57.180: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 18 23:55:59.172: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:55:59.179: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 18 23:56:01.172: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:56:01.180: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 18 23:56:03.172: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:56:03.180: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 18 23:56:05.172: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 18 23:56:05.179: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:56:05.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7865" for this suite.
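The pod this test polls for, `pod-with-poststart-exec-hook`, declares a postStart exec lifecycle hook. A minimal sketch of such a pod (the image and hook command are illustrative; the real test points the hook at a separate handler container it created earlier):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main                         # hypothetical container name
    image: busybox                     # illustrative image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # runs inside the container right after it is created;
          # the container is not marked Running until the hook completes
          command: ["sh", "-c", "echo started > /tmp/poststart"]
```

The long "still exists" poll loop above is the framework waiting for this pod's deletion to finalize, checking every two seconds.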
Aug 18 23:56:29.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:56:29.641: INFO: namespace container-lifecycle-hook-7865 deletion completed in 24.449516549s
• [SLOW TEST:66.641 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:56:29.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 18 23:56:29.817: INFO: Waiting up to 5m0s for pod "pod-b5a8ff89-adb5-405b-b3b9-19d220e1cc5c" in namespace "emptydir-4451" to be "success or failure"
Aug 18 23:56:29.821: INFO: Pod "pod-b5a8ff89-adb5-405b-b3b9-19d220e1cc5c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.887391ms
Aug 18 23:56:31.827: INFO: Pod "pod-b5a8ff89-adb5-405b-b3b9-19d220e1cc5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010017553s
Aug 18 23:56:33.834: INFO: Pod "pod-b5a8ff89-adb5-405b-b3b9-19d220e1cc5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016245705s
STEP: Saw pod success
Aug 18 23:56:33.834: INFO: Pod "pod-b5a8ff89-adb5-405b-b3b9-19d220e1cc5c" satisfied condition "success or failure"
Aug 18 23:56:33.837: INFO: Trying to get logs from node iruya-worker2 pod pod-b5a8ff89-adb5-405b-b3b9-19d220e1cc5c container test-container:
STEP: delete the pod
Aug 18 23:56:33.881: INFO: Waiting for pod pod-b5a8ff89-adb5-405b-b3b9-19d220e1cc5c to disappear
Aug 18 23:56:33.886: INFO: Pod pod-b5a8ff89-adb5-405b-b3b9-19d220e1cc5c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:56:33.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4451" for this suite.
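In the `(root,0666,default)` case name, "root" is the user the container runs as, "0666" is the file mode the test writes and expects back, and "default" is the emptyDir medium (node disk, as opposed to `Memory` for tmpfs). A hedged sketch of a pod exercising the same idea (names and image are illustrative, not the test's own):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                    # illustrative image
    # create a file with mode 0666 in the volume, then report its perms
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # default medium; use medium: Memory for the tmpfs variant
```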
Aug 18 23:56:39.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:56:40.093: INFO: namespace emptydir-4451 deletion completed in 6.199374903s
• [SLOW TEST:10.449 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:56:40.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 18 23:56:40.213: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82b3ebf3-962b-4bb5-8409-8928b0159b11" in namespace "downward-api-9775" to be "success or failure"
Aug 18 23:56:40.227: INFO: Pod "downwardapi-volume-82b3ebf3-962b-4bb5-8409-8928b0159b11": Phase="Pending", Reason="", readiness=false. Elapsed: 13.927107ms
Aug 18 23:56:42.235: INFO: Pod "downwardapi-volume-82b3ebf3-962b-4bb5-8409-8928b0159b11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02163035s
Aug 18 23:56:44.241: INFO: Pod "downwardapi-volume-82b3ebf3-962b-4bb5-8409-8928b0159b11": Phase="Running", Reason="", readiness=true. Elapsed: 4.027899546s
Aug 18 23:56:46.494: INFO: Pod "downwardapi-volume-82b3ebf3-962b-4bb5-8409-8928b0159b11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.280985786s
STEP: Saw pod success
Aug 18 23:56:46.495: INFO: Pod "downwardapi-volume-82b3ebf3-962b-4bb5-8409-8928b0159b11" satisfied condition "success or failure"
Aug 18 23:56:46.550: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-82b3ebf3-962b-4bb5-8409-8928b0159b11 container client-container:
STEP: delete the pod
Aug 18 23:56:47.106: INFO: Waiting for pod downwardapi-volume-82b3ebf3-962b-4bb5-8409-8928b0159b11 to disappear
Aug 18 23:56:47.360: INFO: Pod downwardapi-volume-82b3ebf3-962b-4bb5-8409-8928b0159b11 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:56:47.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9775" for this suite.
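The memory-request variant above differs from the CPU-limit case only in which resource field the downward API volume item references. A sketch of the relevant volume stanza (paths and divisor are illustrative assumptions):

```yaml
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "mem_request"
        resourceFieldRef:
          containerName: client-container   # must name a container in this pod
          resource: requests.memory
          divisor: 1Mi                      # report the value in mebibytes
```

Here the container does declare `resources.requests.memory`, and the framework asserts the file's contents match that request.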
Aug 18 23:56:55.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:56:55.551: INFO: namespace downward-api-9775 deletion completed in 8.183276303s
• [SLOW TEST:15.457 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:56:55.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-f81c521b-c06a-456a-ad70-cde5f9c3f997
STEP: Creating secret with name secret-projected-all-test-volume-a8dae8c2-e975-4599-91f5-63a29705f1c6
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 18 23:56:56.033: INFO: Waiting up to 5m0s for pod "projected-volume-89889e16-6d13-4bcb-8999-cffe72976701" in namespace "projected-2645" to be "success or failure"
Aug 18 23:56:56.071: INFO: Pod "projected-volume-89889e16-6d13-4bcb-8999-cffe72976701": Phase="Pending", Reason="", readiness=false. Elapsed: 38.348962ms
Aug 18 23:56:58.256: INFO: Pod "projected-volume-89889e16-6d13-4bcb-8999-cffe72976701": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223007701s
Aug 18 23:57:00.612: INFO: Pod "projected-volume-89889e16-6d13-4bcb-8999-cffe72976701": Phase="Running", Reason="", readiness=true. Elapsed: 4.578818147s
Aug 18 23:57:03.247: INFO: Pod "projected-volume-89889e16-6d13-4bcb-8999-cffe72976701": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.214202046s
STEP: Saw pod success
Aug 18 23:57:03.247: INFO: Pod "projected-volume-89889e16-6d13-4bcb-8999-cffe72976701" satisfied condition "success or failure"
Aug 18 23:57:03.395: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-89889e16-6d13-4bcb-8999-cffe72976701 container projected-all-volume-test:
STEP: delete the pod
Aug 18 23:57:03.459: INFO: Waiting for pod projected-volume-89889e16-6d13-4bcb-8999-cffe72976701 to disappear
Aug 18 23:57:03.569: INFO: Pod projected-volume-89889e16-6d13-4bcb-8999-cffe72976701 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:57:03.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2645" for this suite.
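A projected volume merges several volume sources, here a ConfigMap, a Secret, and downward API fields, into one mount, which is what "all components that make up the projection API" refers to. A hedged sketch of the volume stanza (source names are illustrative; the test generates UUID-suffixed names like those in the STEP lines above):

```yaml
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: my-config              # hypothetical ConfigMap name
      - secret:
          name: my-secret              # hypothetical Secret name
      - downwardAPI:
          items:
          - path: "podname"
            fieldRef:
              fieldPath: metadata.name
```

The test container mounts this volume and prints each projected file so the framework can verify all three sources landed under the same mount path.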
Aug 18 23:57:13.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:57:14.221: INFO: namespace projected-2645 deletion completed in 10.646092459s
• [SLOW TEST:18.669 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:57:14.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-6165
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6165 to expose endpoints map[]
Aug 18 23:57:16.468: INFO: successfully validated that service multi-endpoint-test in namespace services-6165 exposes endpoints map[] (280.134376ms elapsed)
STEP: Creating pod pod1 in namespace services-6165
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6165 to expose endpoints map[pod1:[100]]
Aug 18 23:57:23.948: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (7.394831784s elapsed, will retry)
Aug 18 23:57:27.401: INFO: successfully validated that service multi-endpoint-test in namespace services-6165 exposes endpoints map[pod1:[100]] (10.847492421s elapsed)
STEP: Creating pod pod2 in namespace services-6165
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6165 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 18 23:57:32.262: INFO: Unexpected endpoints: found map[37f90bb5-e88a-461e-8ea4-c3014f85cf17:[100]], expected map[pod1:[100] pod2:[101]] (4.85627605s elapsed, will retry)
Aug 18 23:57:34.457: INFO: successfully validated that service multi-endpoint-test in namespace services-6165 exposes endpoints map[pod1:[100] pod2:[101]] (7.050767476s elapsed)
STEP: Deleting pod pod1 in namespace services-6165
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6165 to expose endpoints map[pod2:[101]]
Aug 18 23:57:34.589: INFO: successfully validated that service multi-endpoint-test in namespace services-6165 exposes endpoints map[pod2:[101]] (126.164114ms elapsed)
STEP: Deleting pod pod2 in namespace services-6165
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6165 to expose endpoints map[]
Aug 18 23:57:34.882: INFO: successfully validated that service multi-endpoint-test in namespace services-6165 exposes endpoints map[] (285.897282ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:57:35.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6165" for this suite.
Aug 18 23:57:57.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:57:57.256: INFO: namespace services-6165 deletion completed in 22.172052457s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:43.035 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-node] Downward API
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:57:57.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 18 23:57:57.354: INFO: Waiting up to 5m0s for pod "downward-api-b6ad98ab-fa19-45a0-b0f2-39f1521d74ce" in namespace "downward-api-956" to be "success or failure"
Aug 18 23:57:57.401: INFO: Pod "downward-api-b6ad98ab-fa19-45a0-b0f2-39f1521d74ce": Phase="Pending", Reason="", readiness=false. Elapsed: 46.896912ms
Aug 18 23:57:59.408: INFO: Pod "downward-api-b6ad98ab-fa19-45a0-b0f2-39f1521d74ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053767506s
Aug 18 23:58:01.415: INFO: Pod "downward-api-b6ad98ab-fa19-45a0-b0f2-39f1521d74ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060674592s
STEP: Saw pod success
Aug 18 23:58:01.415: INFO: Pod "downward-api-b6ad98ab-fa19-45a0-b0f2-39f1521d74ce" satisfied condition "success or failure"
Aug 18 23:58:01.420: INFO: Trying to get logs from node iruya-worker pod downward-api-b6ad98ab-fa19-45a0-b0f2-39f1521d74ce container dapi-container:
STEP: delete the pod
Aug 18 23:58:01.518: INFO: Waiting for pod downward-api-b6ad98ab-fa19-45a0-b0f2-39f1521d74ce to disappear
Aug 18 23:58:01.561: INFO: Pod downward-api-b6ad98ab-fa19-45a0-b0f2-39f1521d74ce no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:58:01.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-956" for this suite.
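The host-IP test injects a node-level field into a container environment variable via a downward API `fieldRef`. A minimal sketch of the relevant container stanza (the variable name is an illustrative assumption, not necessarily what the test uses):

```yaml
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
```

The test's `dapi-container` prints its environment, and the framework checks the variable holds a valid IP belonging to the node the pod was scheduled on.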
Aug 18 23:58:09.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:58:10.494: INFO: namespace downward-api-956 deletion completed in 8.924140515s
• [SLOW TEST:13.237 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:58:10.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9472
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-9472
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9472
Aug 18 23:58:11.079: INFO: Found 0 stateful pods, waiting for 1
Aug 18 23:58:21.087: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 18 23:58:21.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9472 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 18 23:58:22.750: INFO: stderr: "I0818 23:58:22.599299 101 log.go:172] (0x40008a2840) (0x40008d68c0) Create stream\nI0818 23:58:22.602708 101 log.go:172] (0x40008a2840) (0x40008d68c0) Stream added, broadcasting: 1\nI0818 23:58:22.621565 101 log.go:172] (0x40008a2840) Reply frame received for 1\nI0818 23:58:22.622421 101 log.go:172] (0x40008a2840) (0x40008d6000) Create stream\nI0818 23:58:22.622523 101 log.go:172] (0x40008a2840) (0x40008d6000) Stream added, broadcasting: 3\nI0818 23:58:22.624473 101 log.go:172] (0x40008a2840) Reply frame received for 3\nI0818 23:58:22.624934 101 log.go:172] (0x40008a2840) (0x4000858000) Create stream\nI0818 23:58:22.625025 101 log.go:172] (0x40008a2840) (0x4000858000) Stream added, broadcasting: 5\nI0818 23:58:22.626797 101 log.go:172] (0x40008a2840) Reply frame received for 5\nI0818 23:58:22.690598 101 log.go:172] (0x40008a2840) Data frame received for 5\nI0818 23:58:22.690867 101 log.go:172] (0x4000858000) (5) Data frame handling\nI0818 23:58:22.691266 101 log.go:172] (0x4000858000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0818 23:58:22.732227 101 log.go:172] (0x40008a2840) Data frame received for 3\nI0818 23:58:22.732360 101 log.go:172] (0x40008d6000) (3) Data frame handling\nI0818 23:58:22.732463 101 log.go:172] (0x40008d6000) (3) Data frame sent\nI0818 23:58:22.733873 101 log.go:172] (0x40008a2840) Data frame received for 5\nI0818 23:58:22.733952 101 log.go:172] (0x4000858000) (5) Data frame handling\nI0818 23:58:22.734113 101 log.go:172] (0x40008a2840) Data frame received for 3\nI0818 23:58:22.734255 101 log.go:172] (0x40008d6000) (3) Data frame handling\nI0818 23:58:22.735861 101 log.go:172] (0x40008a2840) Data frame received for 1\nI0818 23:58:22.735910 101 log.go:172] (0x40008d68c0) (1) Data frame handling\nI0818 23:58:22.735963 101 log.go:172] (0x40008d68c0) (1) Data frame sent\nI0818 23:58:22.737372 101 log.go:172] (0x40008a2840) (0x40008d68c0) Stream removed, broadcasting: 1\nI0818 23:58:22.739973 101 log.go:172] (0x40008a2840) Go away received\nI0818 23:58:22.741917 101 log.go:172] (0x40008a2840) (0x40008d68c0) Stream removed, broadcasting: 1\nI0818 23:58:22.742603 101 log.go:172] (0x40008a2840) (0x40008d6000) Stream removed, broadcasting: 3\nI0818 23:58:22.742727 101 log.go:172] (0x40008a2840) (0x4000858000) Stream removed, broadcasting: 5\n"
Aug 18 23:58:22.751: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 18 23:58:22.752: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Aug 18 23:58:22.758: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 18 23:58:32.889: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 18 23:58:32.889: INFO: Waiting for statefulset status.replicas updated to 0
Aug 18 23:58:33.062: INFO: POD NODE PHASE GRACE CONDITIONS
Aug 18 23:58:33.064: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC }]
Aug 18 23:58:33.066: INFO: ss-1 Pending []
Aug 18 23:58:33.066: INFO:
Aug 18 23:58:33.067: INFO: StatefulSet ss has not reached scale 3, at 2
Aug 18 23:58:34.311: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.938286124s
Aug 18 23:58:35.321: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.693413007s
Aug 18 23:58:36.363: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.684018661s
Aug 18 23:58:37.536: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.641543152s
Aug 18 23:58:38.692: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.469182011s
Aug 18 23:58:39.745: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.312801576s
Aug 18 23:58:40.793: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.259379024s
Aug 18 23:58:41.962: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.211494116s
Aug 18 23:58:42.972: INFO: Verifying statefulset ss doesn't scale past 3 for another 42.526828ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9472
Aug 18 23:58:43.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9472 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 18 23:58:45.502: INFO: stderr: "I0818 23:58:45.357460 123 log.go:172] (0x400051e420) (0x4000952780) Create stream\nI0818 23:58:45.362446 123 log.go:172] (0x400051e420) (0x4000952780) Stream added, broadcasting: 1\nI0818 23:58:45.378458 123 log.go:172] (0x400051e420) Reply frame received for 1\nI0818 23:58:45.379109 123 log.go:172] (0x400051e420) (0x4000952820) Create stream\nI0818 23:58:45.379165 123 log.go:172] (0x400051e420) (0x4000952820) Stream added, broadcasting: 3\nI0818 23:58:45.381628 123 log.go:172] (0x400051e420) Reply frame received for 3\nI0818 23:58:45.382202 123 log.go:172] (0x400051e420) (0x4000866000) Create stream\nI0818 23:58:45.382343 123 log.go:172] (0x400051e420) (0x4000866000) Stream added, broadcasting: 5\nI0818 23:58:45.384814 123 log.go:172] (0x400051e420) Reply frame received for 5\nI0818 23:58:45.481005 123 log.go:172] (0x400051e420) Data frame received for 3\nI0818 23:58:45.481407 123 log.go:172] (0x400051e420) Data frame received for 5\nI0818 23:58:45.481551 123 log.go:172] (0x4000952820) (3) Data frame handling\nI0818 23:58:45.481736 123 log.go:172] (0x4000866000) (5) Data frame handling\nI0818 23:58:45.481965 123 log.go:172] (0x400051e420) Data frame received for 1\nI0818 23:58:45.482110 123 log.go:172] (0x4000952780) (1) Data frame handling\nI0818 23:58:45.482779 123 log.go:172] (0x4000866000) (5) Data frame sent\nI0818 23:58:45.482956 123 log.go:172] (0x4000952780) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0818 23:58:45.483398 123 log.go:172] (0x4000952820) (3) Data frame sent\nI0818 23:58:45.483509 123 log.go:172] (0x400051e420) Data frame received for 3\nI0818 23:58:45.483586 123 log.go:172] (0x4000952820) (3) Data frame handling\nI0818 23:58:45.483815 123 log.go:172] (0x400051e420) Data frame received for 5\nI0818 23:58:45.483897 123 log.go:172] (0x4000866000) (5) Data frame handling\nI0818 23:58:45.484959 123 log.go:172] (0x400051e420) (0x4000952780) Stream removed, broadcasting: 1\nI0818 23:58:45.487402 123 log.go:172] (0x400051e420) Go away received\nI0818 23:58:45.490929 123 log.go:172] (0x400051e420) (0x4000952780) Stream removed, broadcasting: 1\nI0818 23:58:45.491269 123 log.go:172] (0x400051e420) (0x4000952820) Stream removed, broadcasting: 3\nI0818 23:58:45.491528 123 log.go:172] (0x400051e420) (0x4000866000) Stream removed, broadcasting: 5\n"
Aug 18 23:58:45.503: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 18 23:58:45.503: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Aug 18 23:58:45.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9472 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 18 23:58:47.010: INFO: stderr: "I0818 23:58:46.899946 145 log.go:172] (0x400085e420) (0x40004246e0) Create stream\nI0818 23:58:46.904297 145 log.go:172] (0x400085e420) (0x40004246e0) Stream added, broadcasting: 1\nI0818 23:58:46.917298 145 log.go:172] (0x400085e420) Reply frame received for 1\nI0818 23:58:46.918025 145 log.go:172] (0x400085e420) (0x4000960000) Create stream\nI0818 23:58:46.918132 145 log.go:172] (0x400085e420) (0x4000960000) Stream added, broadcasting: 3\nI0818 23:58:46.919753 145 log.go:172] (0x400085e420) Reply frame received for 3\nI0818 23:58:46.920105 145 log.go:172] (0x400085e420) (0x4000424780) Create stream\nI0818 23:58:46.920171 145 log.go:172] (0x400085e420) (0x4000424780) Stream added, broadcasting: 5\nI0818 23:58:46.921371 145 log.go:172] (0x400085e420) Reply frame received for 5\nI0818 23:58:46.983631 145 log.go:172] (0x400085e420) Data frame received for 3\nI0818 23:58:46.984040 145 log.go:172] (0x400085e420) Data frame received for 5\nI0818 23:58:46.984221 145 log.go:172] (0x4000424780) (5) Data frame handling\nI0818 23:58:46.984481 145 log.go:172] (0x4000960000) (3) Data frame handling\nI0818 23:58:46.984813 145 log.go:172] (0x400085e420) Data frame received for 1\nI0818 23:58:46.984967 145 log.go:172] (0x40004246e0) (1) Data frame handling\nI0818 23:58:46.985365 145 log.go:172] (0x4000960000) (3) Data frame sent\nI0818 23:58:46.985528 145 log.go:172] (0x40004246e0) (1) Data frame sent\nI0818 23:58:46.985797 145 log.go:172] (0x4000424780) (5) Data frame sent\nI0818 23:58:46.985946 145 log.go:172]
(0x400085e420) Data frame received for 5\nI0818 23:58:46.986070 145 log.go:172] (0x4000424780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0818 23:58:46.986469 145 log.go:172] (0x400085e420) Data frame received for 3\nI0818 23:58:46.986699 145 log.go:172] (0x4000960000) (3) Data frame handling\nI0818 23:58:46.988597 145 log.go:172] (0x400085e420) (0x40004246e0) Stream removed, broadcasting: 1\nI0818 23:58:46.992225 145 log.go:172] (0x400085e420) Go away received\nI0818 23:58:47.000317 145 log.go:172] (0x400085e420) (0x40004246e0) Stream removed, broadcasting: 1\nI0818 23:58:47.000520 145 log.go:172] (0x400085e420) (0x4000960000) Stream removed, broadcasting: 3\nI0818 23:58:47.000674 145 log.go:172] (0x400085e420) (0x4000424780) Stream removed, broadcasting: 5\n" Aug 18 23:58:47.011: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 18 23:58:47.011: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 18 23:58:47.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9472 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 18 23:58:48.492: INFO: stderr: "I0818 23:58:48.380789 170 log.go:172] (0x40005b82c0) (0x40008f8140) Create stream\nI0818 23:58:48.385404 170 log.go:172] (0x40005b82c0) (0x40008f8140) Stream added, broadcasting: 1\nI0818 23:58:48.395350 170 log.go:172] (0x40005b82c0) Reply frame received for 1\nI0818 23:58:48.395876 170 log.go:172] (0x40005b82c0) (0x40005fe280) Create stream\nI0818 23:58:48.395939 170 log.go:172] (0x40005b82c0) (0x40005fe280) Stream added, broadcasting: 3\nI0818 23:58:48.397439 170 log.go:172] (0x40005b82c0) Reply frame received for 3\nI0818 23:58:48.397641 170 log.go:172] (0x40005b82c0) (0x40008f8280) Create stream\nI0818 23:58:48.397690 170 
log.go:172] (0x40005b82c0) (0x40008f8280) Stream added, broadcasting: 5\nI0818 23:58:48.399246 170 log.go:172] (0x40005b82c0) Reply frame received for 5\nI0818 23:58:48.470757 170 log.go:172] (0x40005b82c0) Data frame received for 5\nI0818 23:58:48.471034 170 log.go:172] (0x40005b82c0) Data frame received for 1\nI0818 23:58:48.471334 170 log.go:172] (0x40005b82c0) Data frame received for 3\nI0818 23:58:48.471445 170 log.go:172] (0x40008f8140) (1) Data frame handling\nI0818 23:58:48.471544 170 log.go:172] (0x40005fe280) (3) Data frame handling\nI0818 23:58:48.471792 170 log.go:172] (0x40008f8280) (5) Data frame handling\nI0818 23:58:48.473102 170 log.go:172] (0x40008f8140) (1) Data frame sent\nI0818 23:58:48.473220 170 log.go:172] (0x40005fe280) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0818 23:58:48.473810 170 log.go:172] (0x40008f8280) (5) Data frame sent\nI0818 23:58:48.474413 170 log.go:172] (0x40005b82c0) Data frame received for 3\nI0818 23:58:48.474521 170 log.go:172] (0x40005fe280) (3) Data frame handling\nI0818 23:58:48.475577 170 log.go:172] (0x40005b82c0) Data frame received for 5\nI0818 23:58:48.476844 170 log.go:172] (0x40005b82c0) (0x40008f8140) Stream removed, broadcasting: 1\nI0818 23:58:48.478033 170 log.go:172] (0x40008f8280) (5) Data frame handling\nI0818 23:58:48.478667 170 log.go:172] (0x40005b82c0) Go away received\nI0818 23:58:48.481794 170 log.go:172] (0x40005b82c0) (0x40008f8140) Stream removed, broadcasting: 1\nI0818 23:58:48.482349 170 log.go:172] (0x40005b82c0) (0x40005fe280) Stream removed, broadcasting: 3\nI0818 23:58:48.482581 170 log.go:172] (0x40005b82c0) (0x40008f8280) Stream removed, broadcasting: 5\n" Aug 18 23:58:48.494: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 18 23:58:48.494: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> 
'/usr/share/nginx/html/index.html' Aug 18 23:58:48.754: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 18 23:58:48.754: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 18 23:58:48.754: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 18 23:58:48.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9472 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 18 23:58:50.295: INFO: stderr: "I0818 23:58:50.181531 193 log.go:172] (0x40006e0840) (0x4000956960) Create stream\nI0818 23:58:50.187869 193 log.go:172] (0x40006e0840) (0x4000956960) Stream added, broadcasting: 1\nI0818 23:58:50.203734 193 log.go:172] (0x40006e0840) Reply frame received for 1\nI0818 23:58:50.204462 193 log.go:172] (0x40006e0840) (0x400093c000) Create stream\nI0818 23:58:50.204550 193 log.go:172] (0x40006e0840) (0x400093c000) Stream added, broadcasting: 3\nI0818 23:58:50.206190 193 log.go:172] (0x40006e0840) Reply frame received for 3\nI0818 23:58:50.206516 193 log.go:172] (0x40006e0840) (0x40009560a0) Create stream\nI0818 23:58:50.206613 193 log.go:172] (0x40006e0840) (0x40009560a0) Stream added, broadcasting: 5\nI0818 23:58:50.207777 193 log.go:172] (0x40006e0840) Reply frame received for 5\nI0818 23:58:50.273765 193 log.go:172] (0x40006e0840) Data frame received for 3\nI0818 23:58:50.274045 193 log.go:172] (0x40006e0840) Data frame received for 5\nI0818 23:58:50.274211 193 log.go:172] (0x400093c000) (3) Data frame handling\nI0818 23:58:50.274385 193 log.go:172] (0x40009560a0) (5) Data frame handling\nI0818 23:58:50.274590 193 log.go:172] (0x40006e0840) Data frame received for 1\nI0818 23:58:50.274694 193 log.go:172] (0x4000956960) (1) Data frame handling\nI0818 23:58:50.274964 193 log.go:172] (0x40009560a0) (5) Data frame 
sent\nI0818 23:58:50.275127 193 log.go:172] (0x400093c000) (3) Data frame sent\nI0818 23:58:50.275328 193 log.go:172] (0x40006e0840) Data frame received for 5\nI0818 23:58:50.275437 193 log.go:172] (0x40009560a0) (5) Data frame handling\nI0818 23:58:50.275498 193 log.go:172] (0x40006e0840) Data frame received for 3\nI0818 23:58:50.275557 193 log.go:172] (0x400093c000) (3) Data frame handling\nI0818 23:58:50.275790 193 log.go:172] (0x4000956960) (1) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0818 23:58:50.277922 193 log.go:172] (0x40006e0840) (0x4000956960) Stream removed, broadcasting: 1\nI0818 23:58:50.280118 193 log.go:172] (0x40006e0840) Go away received\nI0818 23:58:50.282995 193 log.go:172] (0x40006e0840) (0x4000956960) Stream removed, broadcasting: 1\nI0818 23:58:50.283404 193 log.go:172] (0x40006e0840) (0x400093c000) Stream removed, broadcasting: 3\nI0818 23:58:50.283670 193 log.go:172] (0x40006e0840) (0x40009560a0) Stream removed, broadcasting: 5\n" Aug 18 23:58:50.296: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 18 23:58:50.296: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 18 23:58:50.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9472 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 18 23:58:51.850: INFO: stderr: "I0818 23:58:51.690801 215 log.go:172] (0x40008786e0) (0x4000844820) Create stream\nI0818 23:58:51.693568 215 log.go:172] (0x40008786e0) (0x4000844820) Stream added, broadcasting: 1\nI0818 23:58:51.709300 215 log.go:172] (0x40008786e0) Reply frame received for 1\nI0818 23:58:51.710051 215 log.go:172] (0x40008786e0) (0x40008440a0) Create stream\nI0818 23:58:51.710125 215 log.go:172] (0x40008786e0) (0x40008440a0) Stream added, broadcasting: 3\nI0818 23:58:51.711438 215 log.go:172] (0x40008786e0) Reply frame 
received for 3\nI0818 23:58:51.711664 215 log.go:172] (0x40008786e0) (0x400082c000) Create stream\nI0818 23:58:51.711718 215 log.go:172] (0x40008786e0) (0x400082c000) Stream added, broadcasting: 5\nI0818 23:58:51.712886 215 log.go:172] (0x40008786e0) Reply frame received for 5\nI0818 23:58:51.765185 215 log.go:172] (0x40008786e0) Data frame received for 5\nI0818 23:58:51.765379 215 log.go:172] (0x400082c000) (5) Data frame handling\nI0818 23:58:51.765804 215 log.go:172] (0x400082c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0818 23:58:51.828202 215 log.go:172] (0x40008786e0) Data frame received for 3\nI0818 23:58:51.828439 215 log.go:172] (0x40008440a0) (3) Data frame handling\nI0818 23:58:51.828661 215 log.go:172] (0x40008786e0) Data frame received for 5\nI0818 23:58:51.829051 215 log.go:172] (0x400082c000) (5) Data frame handling\nI0818 23:58:51.829353 215 log.go:172] (0x40008440a0) (3) Data frame sent\nI0818 23:58:51.829567 215 log.go:172] (0x40008786e0) Data frame received for 3\nI0818 23:58:51.829713 215 log.go:172] (0x40008440a0) (3) Data frame handling\nI0818 23:58:51.831014 215 log.go:172] (0x40008786e0) Data frame received for 1\nI0818 23:58:51.831158 215 log.go:172] (0x4000844820) (1) Data frame handling\nI0818 23:58:51.831275 215 log.go:172] (0x4000844820) (1) Data frame sent\nI0818 23:58:51.832373 215 log.go:172] (0x40008786e0) (0x4000844820) Stream removed, broadcasting: 1\nI0818 23:58:51.836908 215 log.go:172] (0x40008786e0) Go away received\nI0818 23:58:51.838489 215 log.go:172] (0x40008786e0) (0x4000844820) Stream removed, broadcasting: 1\nI0818 23:58:51.838979 215 log.go:172] (0x40008786e0) (0x40008440a0) Stream removed, broadcasting: 3\nI0818 23:58:51.839461 215 log.go:172] (0x40008786e0) (0x400082c000) Stream removed, broadcasting: 5\n" Aug 18 23:58:51.851: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 18 23:58:51.851: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || 
true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 18 23:58:51.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9472 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 18 23:58:53.392: INFO: stderr: "I0818 23:58:53.249982 238 log.go:172] (0x4000138dc0) (0x40003a06e0) Create stream\nI0818 23:58:53.255876 238 log.go:172] (0x4000138dc0) (0x40003a06e0) Stream added, broadcasting: 1\nI0818 23:58:53.273758 238 log.go:172] (0x4000138dc0) Reply frame received for 1\nI0818 23:58:53.275144 238 log.go:172] (0x4000138dc0) (0x40003a0780) Create stream\nI0818 23:58:53.275261 238 log.go:172] (0x4000138dc0) (0x40003a0780) Stream added, broadcasting: 3\nI0818 23:58:53.277444 238 log.go:172] (0x4000138dc0) Reply frame received for 3\nI0818 23:58:53.277879 238 log.go:172] (0x4000138dc0) (0x40003a0820) Create stream\nI0818 23:58:53.277962 238 log.go:172] (0x4000138dc0) (0x40003a0820) Stream added, broadcasting: 5\nI0818 23:58:53.279647 238 log.go:172] (0x4000138dc0) Reply frame received for 5\nI0818 23:58:53.348666 238 log.go:172] (0x4000138dc0) Data frame received for 5\nI0818 23:58:53.348995 238 log.go:172] (0x40003a0820) (5) Data frame handling\nI0818 23:58:53.349424 238 log.go:172] (0x40003a0820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0818 23:58:53.374940 238 log.go:172] (0x4000138dc0) Data frame received for 3\nI0818 23:58:53.375020 238 log.go:172] (0x40003a0780) (3) Data frame handling\nI0818 23:58:53.375104 238 log.go:172] (0x40003a0780) (3) Data frame sent\nI0818 23:58:53.375178 238 log.go:172] (0x4000138dc0) Data frame received for 3\nI0818 23:58:53.375266 238 log.go:172] (0x40003a0780) (3) Data frame handling\nI0818 23:58:53.375552 238 log.go:172] (0x4000138dc0) Data frame received for 5\nI0818 23:58:53.375744 238 log.go:172] (0x40003a0820) (5) Data frame handling\nI0818 23:58:53.377336 238 log.go:172] (0x4000138dc0) Data frame received 
for 1\nI0818 23:58:53.377447 238 log.go:172] (0x40003a06e0) (1) Data frame handling\nI0818 23:58:53.377577 238 log.go:172] (0x40003a06e0) (1) Data frame sent\nI0818 23:58:53.378297 238 log.go:172] (0x4000138dc0) (0x40003a06e0) Stream removed, broadcasting: 1\nI0818 23:58:53.380344 238 log.go:172] (0x4000138dc0) Go away received\nI0818 23:58:53.384390 238 log.go:172] (0x4000138dc0) (0x40003a06e0) Stream removed, broadcasting: 1\nI0818 23:58:53.384582 238 log.go:172] (0x4000138dc0) (0x40003a0780) Stream removed, broadcasting: 3\nI0818 23:58:53.384791 238 log.go:172] (0x4000138dc0) (0x40003a0820) Stream removed, broadcasting: 5\n" Aug 18 23:58:53.394: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 18 23:58:53.394: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 18 23:58:53.394: INFO: Waiting for statefulset status.replicas updated to 0 Aug 18 23:58:53.401: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 18 23:59:03.414: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 18 23:59:03.414: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 18 23:59:03.414: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 18 23:59:03.471: INFO: POD NODE PHASE GRACE CONDITIONS Aug 18 23:59:03.471: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC }] Aug 18 23:59:03.472: INFO: ss-1 iruya-worker 
Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC }] Aug 18 23:59:03.473: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC }] Aug 18 23:59:03.473: INFO: Aug 18 23:59:03.473: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 18 23:59:04.603: INFO: POD NODE PHASE GRACE CONDITIONS Aug 18 23:59:04.604: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC }] Aug 18 23:59:04.605: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC }] Aug 18 23:59:04.605: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC }] Aug 18 23:59:04.605: INFO: Aug 18 23:59:04.605: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 18 23:59:05.615: INFO: POD NODE PHASE GRACE CONDITIONS Aug 18 23:59:05.615: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC }] Aug 18 23:59:05.615: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC }] Aug 18 23:59:05.616: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:54 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC }] Aug 18 23:59:05.616: INFO: Aug 18 23:59:05.616: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 18 23:59:06.623: INFO: POD NODE PHASE GRACE CONDITIONS Aug 18 23:59:06.623: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC }] Aug 18 23:59:06.623: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC }] Aug 18 23:59:06.623: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC }] Aug 18 23:59:06.623: INFO: Aug 18 23:59:06.624: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 18 
23:59:07.633: INFO: POD NODE PHASE GRACE CONDITIONS Aug 18 23:59:07.634: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC }] Aug 18 23:59:07.634: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC }] Aug 18 23:59:07.634: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC }] Aug 18 23:59:07.635: INFO: Aug 18 23:59:07.635: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 18 23:59:09.011: INFO: POD NODE PHASE GRACE CONDITIONS Aug 18 23:59:09.012: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC }] Aug 18 23:59:09.012: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC }] Aug 18 23:59:09.012: INFO: Aug 18 23:59:09.012: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 18 23:59:10.102: INFO: POD NODE PHASE GRACE CONDITIONS Aug 18 23:59:10.102: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC }] Aug 18 23:59:10.103: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC }] Aug 18 23:59:10.103: INFO: Aug 18 23:59:10.103: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 18 23:59:11.367: INFO: POD NODE PHASE GRACE 
CONDITIONS Aug 18 23:59:11.367: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC }] Aug 18 23:59:11.368: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC }] Aug 18 23:59:11.368: INFO: Aug 18 23:59:11.368: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 18 23:59:12.376: INFO: POD NODE PHASE GRACE CONDITIONS Aug 18 23:59:12.376: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC }] Aug 18 23:59:12.377: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 
23:58:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:33 +0000 UTC }] Aug 18 23:59:12.377: INFO: Aug 18 23:59:12.377: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 18 23:59:13.453: INFO: POD NODE PHASE GRACE CONDITIONS Aug 18 23:59:13.453: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-18 23:58:11 +0000 UTC }] Aug 18 23:59:13.454: INFO: Aug 18 23:59:13.454: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-9472 Aug 18 23:59:14.461: INFO: Scaling statefulset ss to 0 Aug 18 23:59:14.471: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Aug 18 23:59:14.475: INFO: Deleting all statefulset in ns statefulset-9472 Aug 18 23:59:14.480: INFO: Scaling statefulset ss to 0 Aug 18 23:59:14.491: INFO: Waiting for statefulset status.replicas updated to 0 Aug 18 23:59:14.494: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 18 23:59:14.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9472" for this suite. 
Aug 18 23:59:22.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:59:22.673: INFO: namespace statefulset-9472 deletion completed in 8.150774305s

• [SLOW TEST:72.176 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:59:22.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 18 23:59:22.802: INFO: Waiting up to 5m0s for pod "pod-985cd845-d3a3-4828-9a53-cb8083057eda" in namespace "emptydir-4768" to be "success or failure"
Aug 18 23:59:22.847: INFO: Pod "pod-985cd845-d3a3-4828-9a53-cb8083057eda": Phase="Pending", Reason="", readiness=false. Elapsed: 44.242972ms
Aug 18 23:59:24.852: INFO: Pod "pod-985cd845-d3a3-4828-9a53-cb8083057eda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049679954s
Aug 18 23:59:26.859: INFO: Pod "pod-985cd845-d3a3-4828-9a53-cb8083057eda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056573979s
Aug 18 23:59:28.866: INFO: Pod "pod-985cd845-d3a3-4828-9a53-cb8083057eda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063857891s
STEP: Saw pod success
Aug 18 23:59:28.866: INFO: Pod "pod-985cd845-d3a3-4828-9a53-cb8083057eda" satisfied condition "success or failure"
Aug 18 23:59:28.871: INFO: Trying to get logs from node iruya-worker2 pod pod-985cd845-d3a3-4828-9a53-cb8083057eda container test-container: 
STEP: delete the pod
Aug 18 23:59:29.077: INFO: Waiting for pod pod-985cd845-d3a3-4828-9a53-cb8083057eda to disappear
Aug 18 23:59:29.151: INFO: Pod pod-985cd845-d3a3-4828-9a53-cb8083057eda no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:59:29.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4768" for this suite.
Aug 18 23:59:37.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:59:37.540: INFO: namespace emptydir-4768 deletion completed in 8.3816665s

• [SLOW TEST:14.861 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:59:37.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-1738/secret-test-7810414a-54a6-46aa-93ba-52314a2d23bc
STEP: Creating a pod to test consume secrets
Aug 18 23:59:38.987: INFO: Waiting up to 5m0s for pod "pod-configmaps-3fb6322b-7d17-4556-bc84-72f85214b04f" in namespace "secrets-1738" to be "success or failure"
Aug 18 23:59:39.050: INFO: Pod "pod-configmaps-3fb6322b-7d17-4556-bc84-72f85214b04f": Phase="Pending", Reason="", readiness=false. Elapsed: 62.133605ms
Aug 18 23:59:41.446: INFO: Pod "pod-configmaps-3fb6322b-7d17-4556-bc84-72f85214b04f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.458302315s
Aug 18 23:59:44.070: INFO: Pod "pod-configmaps-3fb6322b-7d17-4556-bc84-72f85214b04f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.081918551s
Aug 18 23:59:46.076: INFO: Pod "pod-configmaps-3fb6322b-7d17-4556-bc84-72f85214b04f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.088111622s
STEP: Saw pod success
Aug 18 23:59:46.076: INFO: Pod "pod-configmaps-3fb6322b-7d17-4556-bc84-72f85214b04f" satisfied condition "success or failure"
Aug 18 23:59:46.081: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-3fb6322b-7d17-4556-bc84-72f85214b04f container env-test: 
STEP: delete the pod
Aug 18 23:59:46.450: INFO: Waiting for pod pod-configmaps-3fb6322b-7d17-4556-bc84-72f85214b04f to disappear
Aug 18 23:59:46.473: INFO: Pod pod-configmaps-3fb6322b-7d17-4556-bc84-72f85214b04f no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:59:46.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1738" for this suite.
Aug 18 23:59:52.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 18 23:59:52.765: INFO: namespace secrets-1738 deletion completed in 6.284785271s

• [SLOW TEST:15.224 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 18 23:59:52.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 18 23:59:53.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 18 23:59:57.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6763" for this suite.
Aug 19 00:00:45.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:00:45.543: INFO: namespace pods-6763 deletion completed in 48.155527988s

• [SLOW TEST:52.777 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:00:45.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:00:45.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5222" for this suite.
Aug 19 00:00:51.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:00:51.743: INFO: namespace services-5222 deletion completed in 6.126354496s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.199 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:00:51.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-e00785b9-c46e-4d9c-a0d4-e5586d423f9d
Aug 19 00:00:51.842: INFO: Pod name my-hostname-basic-e00785b9-c46e-4d9c-a0d4-e5586d423f9d: Found 0 pods out of 1
Aug 19 00:00:56.849: INFO: Pod name my-hostname-basic-e00785b9-c46e-4d9c-a0d4-e5586d423f9d: Found 1 pods out of 1
Aug 19 00:00:56.849: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e00785b9-c46e-4d9c-a0d4-e5586d423f9d" are running
Aug 19 00:00:56.854: INFO: Pod "my-hostname-basic-e00785b9-c46e-4d9c-a0d4-e5586d423f9d-vgk6r" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 00:00:51 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 00:00:55 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 00:00:55 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 00:00:51 +0000 UTC Reason: Message:}])
Aug 19 00:00:56.855: INFO: Trying to dial the pod
Aug 19 00:01:01.875: INFO: Controller my-hostname-basic-e00785b9-c46e-4d9c-a0d4-e5586d423f9d: Got expected result from replica 1 [my-hostname-basic-e00785b9-c46e-4d9c-a0d4-e5586d423f9d-vgk6r]: "my-hostname-basic-e00785b9-c46e-4d9c-a0d4-e5586d423f9d-vgk6r", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:01:01.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5944" for this suite.
Aug 19 00:01:09.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:01:10.018: INFO: namespace replication-controller-5944 deletion completed in 8.135125815s

• [SLOW TEST:18.272 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:01:10.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 00:01:10.514: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 19 00:01:16.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9466'
Aug 19 00:01:18.273: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 19 00:01:18.273: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Aug 19 00:01:18.291: INFO: scanned /root for discovery docs: 
Aug 19 00:01:18.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9466'
Aug 19 00:01:37.957: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 19 00:01:37.957: INFO: stdout: "Created e2e-test-nginx-rc-e48ceaf98c65ea5183ef1404d21256ae\nScaling up e2e-test-nginx-rc-e48ceaf98c65ea5183ef1404d21256ae from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e48ceaf98c65ea5183ef1404d21256ae up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e48ceaf98c65ea5183ef1404d21256ae to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Aug 19 00:01:37.958: INFO: stdout: "Created e2e-test-nginx-rc-e48ceaf98c65ea5183ef1404d21256ae\nScaling up e2e-test-nginx-rc-e48ceaf98c65ea5183ef1404d21256ae from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e48ceaf98c65ea5183ef1404d21256ae up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e48ceaf98c65ea5183ef1404d21256ae to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Aug 19 00:01:37.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-9466'
Aug 19 00:01:39.214: INFO: stderr: ""
Aug 19 00:01:39.214: INFO: stdout: "e2e-test-nginx-rc-e48ceaf98c65ea5183ef1404d21256ae-bhmhp "
Aug 19 00:01:39.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e48ceaf98c65ea5183ef1404d21256ae-bhmhp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9466'
Aug 19 00:01:40.525: INFO: stderr: ""
Aug 19 00:01:40.525: INFO: stdout: "true"
Aug 19 00:01:40.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e48ceaf98c65ea5183ef1404d21256ae-bhmhp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9466'
Aug 19 00:01:41.869: INFO: stderr: ""
Aug 19 00:01:41.869: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Aug 19 00:01:41.869: INFO: e2e-test-nginx-rc-e48ceaf98c65ea5183ef1404d21256ae-bhmhp is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Aug 19 00:01:41.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9466'
Aug 19 00:01:43.496: INFO: stderr: ""
Aug 19 00:01:43.496: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:01:43.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9466" for this suite.
Aug 19 00:01:56.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:01:56.695: INFO: namespace kubectl-9466 deletion completed in 12.809892401s

• [SLOW TEST:39.841 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
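The rolling-update stdout above records the scaling order: the replacement controller surges from 0 to 1 while the old one drains from 1 to 0, keeping at least 1 pod available and never exceeding 2 pods in total. A minimal sketch of that scheduling loop, for illustration only (this is not kubectl's actual implementation):

```python
def rolling_update_steps(old_replicas, new_replicas, min_available=1, max_total=2):
    """Yield (old, new) replica counts while moving pods from the old RC to the new one.

    Illustrative sketch of the order reported in the log: surge the new
    controller first, then drain the old one, respecting both bounds.
    """
    old, new = old_replicas, 0
    steps = [(old, new)]
    while old > 0 or new < new_replicas:
        if new < new_replicas and old + new < max_total:
            new += 1  # surge: scale the new controller up
        elif old > 0 and old + new - 1 >= min_available:
            old -= 1  # then scale the old controller down
        steps.append((old, new))
    return steps

print(rolling_update_steps(1, 1))  # [(1, 0), (1, 1), (0, 1)]
```

For the 1-replica case in this test, the sequence matches the log exactly: scale the new RC up to 1, then the old RC down to 0.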
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:01:56.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Aug 19 00:01:57.246: INFO: Waiting up to 5m0s for pod "client-containers-18dfe918-0af4-440d-8ae5-99a670631141" in namespace "containers-8201" to be "success or failure"
Aug 19 00:01:57.261: INFO: Pod "client-containers-18dfe918-0af4-440d-8ae5-99a670631141": Phase="Pending", Reason="", readiness=false. Elapsed: 15.049768ms
Aug 19 00:01:59.407: INFO: Pod "client-containers-18dfe918-0af4-440d-8ae5-99a670631141": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160471392s
Aug 19 00:02:01.414: INFO: Pod "client-containers-18dfe918-0af4-440d-8ae5-99a670631141": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167839638s
Aug 19 00:02:03.442: INFO: Pod "client-containers-18dfe918-0af4-440d-8ae5-99a670631141": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.195646697s
STEP: Saw pod success
Aug 19 00:02:03.442: INFO: Pod "client-containers-18dfe918-0af4-440d-8ae5-99a670631141" satisfied condition "success or failure"
Aug 19 00:02:03.448: INFO: Trying to get logs from node iruya-worker pod client-containers-18dfe918-0af4-440d-8ae5-99a670631141 container test-container: 
STEP: delete the pod
Aug 19 00:02:03.581: INFO: Waiting for pod client-containers-18dfe918-0af4-440d-8ae5-99a670631141 to disappear
Aug 19 00:02:03.602: INFO: Pod client-containers-18dfe918-0af4-440d-8ae5-99a670631141 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:02:03.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8201" for this suite.
Aug 19 00:02:09.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:02:09.835: INFO: namespace containers-8201 deletion completed in 6.225005934s

• [SLOW TEST:13.137 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
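The Docker Containers test above creates a pod that overrides the image's default arguments: in the Kubernetes v1 pod API, a container's `args` field replaces the image's CMD (while `command` would replace its ENTRYPOINT). A hypothetical manifest in that spirit; the pod name, image, and argument values here are invented for illustration, not taken from the test's actual spec:

```python
# Hypothetical pod manifest: `args` overrides the image's default CMD.
# All names and values below are illustrative.
override_args_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "client-containers-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "docker.io/library/busybox:1.29",
            "args": ["echo", "override", "arguments"],  # replaces the image CMD
            # no "command" key: the image's ENTRYPOINT is left untouched
        }],
    },
}

container = override_args_pod["spec"]["containers"][0]
print(container["args"])  # ['echo', 'override', 'arguments']
```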
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:02:09.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 00:02:09.929: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9af02947-0831-4cc4-b154-0339732caf3d" in namespace "projected-2511" to be "success or failure"
Aug 19 00:02:09.938: INFO: Pod "downwardapi-volume-9af02947-0831-4cc4-b154-0339732caf3d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.777292ms
Aug 19 00:02:12.216: INFO: Pod "downwardapi-volume-9af02947-0831-4cc4-b154-0339732caf3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286938367s
Aug 19 00:02:14.224: INFO: Pod "downwardapi-volume-9af02947-0831-4cc4-b154-0339732caf3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294835162s
Aug 19 00:02:16.231: INFO: Pod "downwardapi-volume-9af02947-0831-4cc4-b154-0339732caf3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.30212844s
STEP: Saw pod success
Aug 19 00:02:16.231: INFO: Pod "downwardapi-volume-9af02947-0831-4cc4-b154-0339732caf3d" satisfied condition "success or failure"
Aug 19 00:02:16.244: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-9af02947-0831-4cc4-b154-0339732caf3d container client-container: 
STEP: delete the pod
Aug 19 00:02:16.275: INFO: Waiting for pod downwardapi-volume-9af02947-0831-4cc4-b154-0339732caf3d to disappear
Aug 19 00:02:16.304: INFO: Pod downwardapi-volume-9af02947-0831-4cc4-b154-0339732caf3d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:02:16.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2511" for this suite.
Aug 19 00:02:22.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:02:22.516: INFO: namespace projected-2511 deletion completed in 6.201835071s

• [SLOW TEST:12.679 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:02:22.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 00:02:22.629: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cae6db1e-8e1d-4063-b813-8a94d97e03c3" in namespace "projected-4612" to be "success or failure"
Aug 19 00:02:22.640: INFO: Pod "downwardapi-volume-cae6db1e-8e1d-4063-b813-8a94d97e03c3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.742005ms
Aug 19 00:02:25.291: INFO: Pod "downwardapi-volume-cae6db1e-8e1d-4063-b813-8a94d97e03c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.662128503s
Aug 19 00:02:27.298: INFO: Pod "downwardapi-volume-cae6db1e-8e1d-4063-b813-8a94d97e03c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.668907211s
Aug 19 00:02:29.303: INFO: Pod "downwardapi-volume-cae6db1e-8e1d-4063-b813-8a94d97e03c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.674567832s
STEP: Saw pod success
Aug 19 00:02:29.304: INFO: Pod "downwardapi-volume-cae6db1e-8e1d-4063-b813-8a94d97e03c3" satisfied condition "success or failure"
Aug 19 00:02:29.310: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cae6db1e-8e1d-4063-b813-8a94d97e03c3 container client-container: 
STEP: delete the pod
Aug 19 00:02:29.408: INFO: Waiting for pod downwardapi-volume-cae6db1e-8e1d-4063-b813-8a94d97e03c3 to disappear
Aug 19 00:02:29.454: INFO: Pod downwardapi-volume-cae6db1e-8e1d-4063-b813-8a94d97e03c3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:02:29.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4612" for this suite.
Aug 19 00:02:35.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:02:35.622: INFO: namespace projected-4612 deletion completed in 6.15679052s

• [SLOW TEST:13.103 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
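The test above waits for a pod that reads its own CPU limit from a projected downward API volume and then exits, which is why the log polls for phase "Succeeded" under the "success or failure" condition. A minimal sketch of the kind of pod spec this exercises (pod name, image, and limit value are illustrative, not copied from the log):

```yaml
# Illustrative pod: the container prints its own CPU limit from a
# projected downwardAPI volume, then exits (hence "success or failure").
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                  # the value surfaced in /etc/podinfo/cpu_limit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m
```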
SSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:02:35.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-1364, will wait for the garbage collector to delete the pods
Aug 19 00:02:41.877: INFO: Deleting Job.batch foo took: 9.773691ms
Aug 19 00:02:42.178: INFO: Terminating Job.batch foo pods took: 300.982072ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:03:23.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1364" for this suite.
Aug 19 00:03:29.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:03:29.841: INFO: namespace job-1364 deletion completed in 6.140682954s

• [SLOW TEST:54.217 seconds]
[sig-apps] Job
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
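The Job test above creates a parallel Job, asserts that the number of active pods equals `parallelism`, then deletes the Job and waits for the garbage collector to remove its pods. A Job manifest of roughly that shape (parallelism, image, and command are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: foo                # matches the Job name in the log
spec:
  parallelism: 2           # illustrative; the test checks active pods == parallelism
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox     # illustrative image
        command: ["sleep", "3600"]   # keeps pods active until the Job is deleted
```

Deleting the Job with ownership-based (cascading) deletion is what produces the "will wait for the garbage collector to delete the pods" line: the pods carry an owner reference to the Job and are collected after it is gone.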
SSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:03:29.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Aug 19 00:03:30.852: INFO: created pod pod-service-account-defaultsa
Aug 19 00:03:30.852: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 19 00:03:30.875: INFO: created pod pod-service-account-mountsa
Aug 19 00:03:30.875: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 19 00:03:30.923: INFO: created pod pod-service-account-nomountsa
Aug 19 00:03:30.923: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 19 00:03:31.079: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 19 00:03:31.079: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 19 00:03:31.362: INFO: created pod pod-service-account-mountsa-mountspec
Aug 19 00:03:31.362: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 19 00:03:31.639: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 19 00:03:31.639: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 19 00:03:31.894: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 19 00:03:31.894: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 19 00:03:31.949: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 19 00:03:31.950: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 19 00:03:31.976: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 19 00:03:31.976: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:03:31.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1487" for this suite.
Aug 19 00:04:04.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:04:04.930: INFO: namespace svcaccounts-1487 deletion completed in 31.794211385s

• [SLOW TEST:35.088 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
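The ServiceAccounts test above walks the matrix of two settings: `automountServiceAccountToken` on the ServiceAccount and the same field on the pod spec. The pod-level field, when set, overrides the ServiceAccount-level one, which is why e.g. `pod-service-account-nomountsa-mountspec` still reports `token volume mount: true`. A sketch of the two knobs (object names are illustrative):

```yaml
# ServiceAccount-level opt-out:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa          # hypothetical name
automountServiceAccountToken: false
---
# Pod-level setting; when present it takes precedence over the
# ServiceAccount value, so this pod mounts a token despite nomount-sa.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-mountspec  # hypothetical name
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true
  containers:
  - name: c
    image: nginx            # illustrative image
```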
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:04:04.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1932
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 19 00:04:05.088: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 19 00:04:39.906: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.222:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1932 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 00:04:39.906: INFO: >>> kubeConfig: /root/.kube/config
I0819 00:04:39.979940       7 log.go:172] (0x400144ed10) (0x4002213220) Create stream
I0819 00:04:39.980364       7 log.go:172] (0x400144ed10) (0x4002213220) Stream added, broadcasting: 1
I0819 00:04:40.001446       7 log.go:172] (0x400144ed10) Reply frame received for 1
I0819 00:04:40.002090       7 log.go:172] (0x400144ed10) (0x40022132c0) Create stream
I0819 00:04:40.002158       7 log.go:172] (0x400144ed10) (0x40022132c0) Stream added, broadcasting: 3
I0819 00:04:40.004201       7 log.go:172] (0x400144ed10) Reply frame received for 3
I0819 00:04:40.004673       7 log.go:172] (0x400144ed10) (0x4002b51180) Create stream
I0819 00:04:40.004851       7 log.go:172] (0x400144ed10) (0x4002b51180) Stream added, broadcasting: 5
I0819 00:04:40.006328       7 log.go:172] (0x400144ed10) Reply frame received for 5
I0819 00:04:40.080412       7 log.go:172] (0x400144ed10) Data frame received for 3
I0819 00:04:40.080825       7 log.go:172] (0x40022132c0) (3) Data frame handling
I0819 00:04:40.081066       7 log.go:172] (0x400144ed10) Data frame received for 5
I0819 00:04:40.081174       7 log.go:172] (0x4002b51180) (5) Data frame handling
I0819 00:04:40.081338       7 log.go:172] (0x400144ed10) Data frame received for 1
I0819 00:04:40.081541       7 log.go:172] (0x4002213220) (1) Data frame handling
I0819 00:04:40.083576       7 log.go:172] (0x4002213220) (1) Data frame sent
I0819 00:04:40.084410       7 log.go:172] (0x40022132c0) (3) Data frame sent
I0819 00:04:40.084570       7 log.go:172] (0x400144ed10) Data frame received for 3
I0819 00:04:40.084689       7 log.go:172] (0x40022132c0) (3) Data frame handling
I0819 00:04:40.085086       7 log.go:172] (0x400144ed10) (0x4002213220) Stream removed, broadcasting: 1
I0819 00:04:40.087946       7 log.go:172] (0x400144ed10) Go away received
I0819 00:04:40.090580       7 log.go:172] (0x400144ed10) (0x4002213220) Stream removed, broadcasting: 1
I0819 00:04:40.090897       7 log.go:172] (0x400144ed10) (0x40022132c0) Stream removed, broadcasting: 3
I0819 00:04:40.091131       7 log.go:172] (0x400144ed10) (0x4002b51180) Stream removed, broadcasting: 5
Aug 19 00:04:40.092: INFO: Found all expected endpoints: [netserver-0]
Aug 19 00:04:40.097: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.89:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1932 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 00:04:40.097: INFO: >>> kubeConfig: /root/.kube/config
I0819 00:04:40.203601       7 log.go:172] (0x4001384420) (0x400193a5a0) Create stream
I0819 00:04:40.203790       7 log.go:172] (0x4001384420) (0x400193a5a0) Stream added, broadcasting: 1
I0819 00:04:40.207075       7 log.go:172] (0x4001384420) Reply frame received for 1
I0819 00:04:40.207210       7 log.go:172] (0x4001384420) (0x4003004f00) Create stream
I0819 00:04:40.207277       7 log.go:172] (0x4001384420) (0x4003004f00) Stream added, broadcasting: 3
I0819 00:04:40.208581       7 log.go:172] (0x4001384420) Reply frame received for 3
I0819 00:04:40.208876       7 log.go:172] (0x4001384420) (0x400193a640) Create stream
I0819 00:04:40.208966       7 log.go:172] (0x4001384420) (0x400193a640) Stream added, broadcasting: 5
I0819 00:04:40.210253       7 log.go:172] (0x4001384420) Reply frame received for 5
I0819 00:04:40.293141       7 log.go:172] (0x4001384420) Data frame received for 5
I0819 00:04:40.293289       7 log.go:172] (0x400193a640) (5) Data frame handling
I0819 00:04:40.293368       7 log.go:172] (0x4001384420) Data frame received for 3
I0819 00:04:40.293463       7 log.go:172] (0x4003004f00) (3) Data frame handling
I0819 00:04:40.293597       7 log.go:172] (0x4003004f00) (3) Data frame sent
I0819 00:04:40.293700       7 log.go:172] (0x4001384420) Data frame received for 3
I0819 00:04:40.293769       7 log.go:172] (0x4003004f00) (3) Data frame handling
I0819 00:04:40.294463       7 log.go:172] (0x4001384420) Data frame received for 1
I0819 00:04:40.294533       7 log.go:172] (0x400193a5a0) (1) Data frame handling
I0819 00:04:40.294615       7 log.go:172] (0x400193a5a0) (1) Data frame sent
I0819 00:04:40.294701       7 log.go:172] (0x4001384420) (0x400193a5a0) Stream removed, broadcasting: 1
I0819 00:04:40.294787       7 log.go:172] (0x4001384420) Go away received
I0819 00:04:40.295362       7 log.go:172] (0x4001384420) (0x400193a5a0) Stream removed, broadcasting: 1
I0819 00:04:40.295499       7 log.go:172] (0x4001384420) (0x4003004f00) Stream removed, broadcasting: 3
I0819 00:04:40.295579       7 log.go:172] (0x4001384420) (0x400193a640) Stream removed, broadcasting: 5
Aug 19 00:04:40.295: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:04:40.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1932" for this suite.
Aug 19 00:05:04.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:05:04.489: INFO: namespace pod-network-test-1932 deletion completed in 24.151610526s

• [SLOW TEST:59.559 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:05:04.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-9796
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9796 to expose endpoints map[]
Aug 19 00:05:04.877: INFO: successfully validated that service endpoint-test2 in namespace services-9796 exposes endpoints map[] (16.52421ms elapsed)
STEP: Creating pod pod1 in namespace services-9796
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9796 to expose endpoints map[pod1:[80]]
Aug 19 00:05:09.860: INFO: successfully validated that service endpoint-test2 in namespace services-9796 exposes endpoints map[pod1:[80]] (4.975333805s elapsed)
STEP: Creating pod pod2 in namespace services-9796
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9796 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 19 00:05:14.276: INFO: Unexpected endpoints: found map[426d4f9f-e05d-443a-b679-7c2295b354e2:[80]], expected map[pod1:[80] pod2:[80]] (4.409744707s elapsed, will retry)
Aug 19 00:05:16.604: INFO: successfully validated that service endpoint-test2 in namespace services-9796 exposes endpoints map[pod1:[80] pod2:[80]] (6.737045598s elapsed)
STEP: Deleting pod pod1 in namespace services-9796
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9796 to expose endpoints map[pod2:[80]]
Aug 19 00:05:16.651: INFO: successfully validated that service endpoint-test2 in namespace services-9796 exposes endpoints map[pod2:[80]] (40.471362ms elapsed)
STEP: Deleting pod pod2 in namespace services-9796
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9796 to expose endpoints map[]
Aug 19 00:05:16.687: INFO: successfully validated that service endpoint-test2 in namespace services-9796 exposes endpoints map[] (31.266025ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:05:17.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9796" for this suite.
Aug 19 00:05:39.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:05:39.524: INFO: namespace services-9796 deletion completed in 22.323839395s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:35.033 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
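The Services test above tracks the endpoints map as pods matching the Service selector are created and deleted: the map goes from empty, to `pod1:[80]`, to both pods, and back to empty. The objects involved look roughly like this (selector label and image are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: endpoint-test2     # pods carrying this label become endpoints
  ports:
  - port: 80
    protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    name: endpoint-test2     # matches the Service selector
spec:
  containers:
  - name: c
    image: nginx             # illustrative image
    ports:
    - containerPort: 80
```

Creating and deleting `pod1`/`pod2` is what drives the endpoint transitions logged above; the one "Unexpected endpoints" retry reflects the endpoints controller catching up.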
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:05:39.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 19 00:05:39.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4988'
Aug 19 00:05:46.696: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 19 00:05:46.697: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Aug 19 00:05:46.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-4988'
Aug 19 00:05:49.566: INFO: stderr: ""
Aug 19 00:05:49.566: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:05:49.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4988" for this suite.
Aug 19 00:05:58.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:05:59.793: INFO: namespace kubectl-4988 deletion completed in 9.97866957s

• [SLOW TEST:20.266 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
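The deprecated `kubectl run --generator=job/v1` invocation in the test above creates a Job roughly equivalent to this manifest (a sketch; field values are inferred from the command line, not read back from the cluster):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
```

On current kubectl, where the generators have been removed, the equivalent is `kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine`.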
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:05:59.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6359
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6359
STEP: Creating statefulset with conflicting port in namespace statefulset-6359
STEP: Waiting until pod test-pod will start running in namespace statefulset-6359
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6359
Aug 19 00:06:05.996: INFO: Observed stateful pod in namespace: statefulset-6359, name: ss-0, uid: 9c61a14b-e707-48f1-a7e0-e443ab773f08, status phase: Pending. Waiting for statefulset controller to delete.
Aug 19 00:06:06.108: INFO: Observed stateful pod in namespace: statefulset-6359, name: ss-0, uid: 9c61a14b-e707-48f1-a7e0-e443ab773f08, status phase: Failed. Waiting for statefulset controller to delete.
Aug 19 00:06:06.113: INFO: Observed stateful pod in namespace: statefulset-6359, name: ss-0, uid: 9c61a14b-e707-48f1-a7e0-e443ab773f08, status phase: Failed. Waiting for statefulset controller to delete.
Aug 19 00:06:06.139: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6359
STEP: Removing pod with conflicting port in namespace statefulset-6359
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6359 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 19 00:06:10.261: INFO: Deleting all statefulset in ns statefulset-6359
Aug 19 00:06:10.266: INFO: Scaling statefulset ss to 0
Aug 19 00:06:30.291: INFO: Waiting for statefulset status.replicas updated to 0
Aug 19 00:06:30.296: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:06:30.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6359" for this suite.
Aug 19 00:06:38.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:06:38.519: INFO: namespace statefulset-6359 deletion completed in 8.155452923s

• [SLOW TEST:38.724 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
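The StatefulSet test above manufactures a scheduling conflict: it pins a plain pod and the StatefulSet's pod template to the same node with the same `hostPort`, so `ss-0` lands in phase Failed, the controller deletes and recreates it, and once the conflicting pod is removed the recreated `ss-0` runs. A sketch of that setup (node name, port, and image are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      nodeName: iruya-worker     # illustrative; the test picks a node explicitly
      containers:
      - name: c
        image: nginx             # illustrative image
        ports:
        - containerPort: 21017
          hostPort: 21017        # collides with the pre-created conflicting pod
```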
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:06:38.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-7eb84366-a857-48f6-9fcb-fc7c78a28183
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:06:38.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7930" for this suite.
Aug 19 00:06:44.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:06:45.115: INFO: namespace configmap-7930 deletion completed in 6.254119127s

• [SLOW TEST:6.595 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
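The ConfigMap test above is a pure validation check: the apiserver rejects a ConfigMap whose `data` map contains an empty key, since keys must be non-empty and match `[-._a-zA-Z0-9]+`. The create that fails looks roughly like this (name is illustrative):

```yaml
# Rejected by apiserver validation: "" is not a valid data key,
# which is exactly what the test asserts.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey   # illustrative name
data:
  "": "value"
```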
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:06:45.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 19 00:06:45.257: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:06:55.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5294" for this suite.
Aug 19 00:07:17.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:07:17.419: INFO: namespace init-container-5294 deletion completed in 22.148923151s

• [SLOW TEST:32.303 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
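The init-container test above (`PodSpec: initContainers in spec.initContainers`) creates a RestartAlways pod whose init containers must each run to completion, in order, before the regular containers start. A minimal pod of that shape (names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example           # hypothetical name
spec:
  restartPolicy: Always
  initContainers:                  # run sequentially, each to completion, before run1 starts
  - name: init1
    image: busybox                 # illustrative images
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: run1
    image: nginx
```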
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:07:17.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 19 00:07:22.113: INFO: Successfully updated pod "labelsupdate7254c303-5db6-491b-abb7-9ef16dfb39a8"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:07:26.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2983" for this suite.
Aug 19 00:07:48.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:07:48.329: INFO: namespace projected-2983 deletion completed in 22.179546781s

• [SLOW TEST:30.909 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
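The spec above verifies that a change to a pod's labels is re-projected into a downward API volume file. A minimal sketch of the kind of pod it exercises (pod name and image are illustrative, not taken from the log):

```yaml
# Illustrative pod: the pod's labels are projected into a file; when
# the labels are updated through the API, the kubelet refreshes the
# file, which is what the test polls for.
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo    # hypothetical name
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```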
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:07:48.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Aug 19 00:07:48.445: INFO: Waiting up to 5m0s for pod "var-expansion-5ed5af63-1af2-4ac3-bbf7-2acb93ba60d5" in namespace "var-expansion-9927" to be "success or failure"
Aug 19 00:07:48.454: INFO: Pod "var-expansion-5ed5af63-1af2-4ac3-bbf7-2acb93ba60d5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.283472ms
Aug 19 00:07:50.592: INFO: Pod "var-expansion-5ed5af63-1af2-4ac3-bbf7-2acb93ba60d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146639809s
Aug 19 00:07:52.607: INFO: Pod "var-expansion-5ed5af63-1af2-4ac3-bbf7-2acb93ba60d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.162606033s
STEP: Saw pod success
Aug 19 00:07:52.608: INFO: Pod "var-expansion-5ed5af63-1af2-4ac3-bbf7-2acb93ba60d5" satisfied condition "success or failure"
Aug 19 00:07:52.617: INFO: Trying to get logs from node iruya-worker pod var-expansion-5ed5af63-1af2-4ac3-bbf7-2acb93ba60d5 container dapi-container: 
STEP: delete the pod
Aug 19 00:07:52.639: INFO: Waiting for pod var-expansion-5ed5af63-1af2-4ac3-bbf7-2acb93ba60d5 to disappear
Aug 19 00:07:52.650: INFO: Pod var-expansion-5ed5af63-1af2-4ac3-bbf7-2acb93ba60d5 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:07:52.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9927" for this suite.
Aug 19 00:08:00.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:08:01.028: INFO: namespace var-expansion-9927 deletion completed in 8.370435271s

• [SLOW TEST:12.698 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:08:01.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 00:08:01.108: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9fb3058-0afc-4e0f-8846-001055f8944f" in namespace "projected-1130" to be "success or failure"
Aug 19 00:08:01.130: INFO: Pod "downwardapi-volume-e9fb3058-0afc-4e0f-8846-001055f8944f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.395096ms
Aug 19 00:08:03.173: INFO: Pod "downwardapi-volume-e9fb3058-0afc-4e0f-8846-001055f8944f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064277359s
Aug 19 00:08:05.180: INFO: Pod "downwardapi-volume-e9fb3058-0afc-4e0f-8846-001055f8944f": Phase="Running", Reason="", readiness=true. Elapsed: 4.071632346s
Aug 19 00:08:07.186: INFO: Pod "downwardapi-volume-e9fb3058-0afc-4e0f-8846-001055f8944f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.077852217s
STEP: Saw pod success
Aug 19 00:08:07.187: INFO: Pod "downwardapi-volume-e9fb3058-0afc-4e0f-8846-001055f8944f" satisfied condition "success or failure"
Aug 19 00:08:07.191: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e9fb3058-0afc-4e0f-8846-001055f8944f container client-container: 
STEP: delete the pod
Aug 19 00:08:07.263: INFO: Waiting for pod downwardapi-volume-e9fb3058-0afc-4e0f-8846-001055f8944f to disappear
Aug 19 00:08:07.326: INFO: Pod downwardapi-volume-e9fb3058-0afc-4e0f-8846-001055f8944f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:08:07.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1130" for this suite.
Aug 19 00:08:13.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:08:13.598: INFO: namespace projected-1130 deletion completed in 6.261658277s

• [SLOW TEST:12.569 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:08:13.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 00:08:13.798: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9784b56d-02f9-4da7-829a-37945e415a16" in namespace "downward-api-8216" to be "success or failure"
Aug 19 00:08:13.832: INFO: Pod "downwardapi-volume-9784b56d-02f9-4da7-829a-37945e415a16": Phase="Pending", Reason="", readiness=false. Elapsed: 33.662578ms
Aug 19 00:08:16.042: INFO: Pod "downwardapi-volume-9784b56d-02f9-4da7-829a-37945e415a16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243367555s
Aug 19 00:08:18.049: INFO: Pod "downwardapi-volume-9784b56d-02f9-4da7-829a-37945e415a16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250420997s
Aug 19 00:08:20.055: INFO: Pod "downwardapi-volume-9784b56d-02f9-4da7-829a-37945e415a16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.257177205s
STEP: Saw pod success
Aug 19 00:08:20.056: INFO: Pod "downwardapi-volume-9784b56d-02f9-4da7-829a-37945e415a16" satisfied condition "success or failure"
Aug 19 00:08:20.065: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-9784b56d-02f9-4da7-829a-37945e415a16 container client-container: 
STEP: delete the pod
Aug 19 00:08:20.096: INFO: Waiting for pod downwardapi-volume-9784b56d-02f9-4da7-829a-37945e415a16 to disappear
Aug 19 00:08:20.107: INFO: Pod downwardapi-volume-9784b56d-02f9-4da7-829a-37945e415a16 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:08:20.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8216" for this suite.
Aug 19 00:08:26.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:08:26.311: INFO: namespace downward-api-8216 deletion completed in 6.194677735s

• [SLOW TEST:12.713 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
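The two DefaultMode specs above (projected and plain downward API volumes) assert the permission bits on projected files. A minimal sketch of the field being tested, with illustrative names:

```yaml
# Illustrative pod: defaultMode sets the file mode for every item in
# the downward API volume; the test reads the mode back from disk.
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo   # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```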
SS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:08:26.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-210e3d8b-cd17-48e9-964c-21cd3f263c0b
STEP: Creating secret with name s-test-opt-upd-929c696b-bb38-4dbc-b907-5ea993ff5eba
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-210e3d8b-cd17-48e9-964c-21cd3f263c0b
STEP: Updating secret s-test-opt-upd-929c696b-bb38-4dbc-b907-5ea993ff5eba
STEP: Creating secret with name s-test-opt-create-2ef9c36c-0827-40b0-a460-b17b7c8e2643
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:08:36.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8009" for this suite.
Aug 19 00:08:58.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:08:58.747: INFO: namespace secrets-8009 deletion completed in 22.142893637s

• [SLOW TEST:32.435 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
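The Secrets spec above exercises `optional` secret volume sources: it deletes one referenced secret, updates another, and creates a third, then waits for the volume contents to converge. A minimal sketch of the volume source involved (the secret name is illustrative):

```yaml
# Illustrative volume source: optional: true lets the pod run even if
# the named secret does not exist yet; when the secret is created,
# updated, or deleted, the kubelet re-projects the volume contents.
volumes:
- name: secret-volume
  secret:
    secretName: s-test-opt-create   # hypothetical name
    optional: true
```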
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:08:58.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 19 00:09:04.215: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:09:04.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8978" for this suite.
Aug 19 00:09:10.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:09:10.440: INFO: namespace container-runtime-8978 deletion completed in 6.168422243s

• [SLOW TEST:11.691 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
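The Container Runtime spec above checks that the termination message is read from the message file when the pod succeeds under `FallbackToLogsOnError`. A minimal sketch of the container spec it implies (name and image are illustrative):

```yaml
# Illustrative container: with FallbackToLogsOnError, the termination
# message comes from terminationMessagePath when that file has content,
# and falls back to the tail of the container log only on error.
containers:
- name: termination-demo     # hypothetical name
  image: busybox
  command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: FallbackToLogsOnError
```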
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:09:10.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Aug 19 00:09:10.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7047'
Aug 19 00:09:12.228: INFO: stderr: ""
Aug 19 00:09:12.229: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 19 00:09:12.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7047'
Aug 19 00:09:13.539: INFO: stderr: ""
Aug 19 00:09:13.539: INFO: stdout: "update-demo-nautilus-cwh5j update-demo-nautilus-dtqff "
Aug 19 00:09:13.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwh5j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7047'
Aug 19 00:09:14.796: INFO: stderr: ""
Aug 19 00:09:14.796: INFO: stdout: ""
Aug 19 00:09:14.796: INFO: update-demo-nautilus-cwh5j is created but not running
Aug 19 00:09:19.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7047'
Aug 19 00:09:21.154: INFO: stderr: ""
Aug 19 00:09:21.154: INFO: stdout: "update-demo-nautilus-cwh5j update-demo-nautilus-dtqff "
Aug 19 00:09:21.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwh5j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7047'
Aug 19 00:09:22.439: INFO: stderr: ""
Aug 19 00:09:22.439: INFO: stdout: "true"
Aug 19 00:09:22.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cwh5j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7047'
Aug 19 00:09:23.722: INFO: stderr: ""
Aug 19 00:09:23.722: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 00:09:23.722: INFO: validating pod update-demo-nautilus-cwh5j
Aug 19 00:09:23.729: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 00:09:23.730: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 19 00:09:23.730: INFO: update-demo-nautilus-cwh5j is verified up and running
Aug 19 00:09:23.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dtqff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7047'
Aug 19 00:09:25.027: INFO: stderr: ""
Aug 19 00:09:25.027: INFO: stdout: "true"
Aug 19 00:09:25.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dtqff -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7047'
Aug 19 00:09:26.319: INFO: stderr: ""
Aug 19 00:09:26.319: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 00:09:26.319: INFO: validating pod update-demo-nautilus-dtqff
Aug 19 00:09:26.324: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 00:09:26.324: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 19 00:09:26.324: INFO: update-demo-nautilus-dtqff is verified up and running
STEP: using delete to clean up resources
Aug 19 00:09:26.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7047'
Aug 19 00:09:27.614: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 00:09:27.615: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 19 00:09:27.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7047'
Aug 19 00:09:28.993: INFO: stderr: "No resources found.\n"
Aug 19 00:09:28.993: INFO: stdout: ""
Aug 19 00:09:28.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7047 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 19 00:09:30.285: INFO: stderr: ""
Aug 19 00:09:30.285: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:09:30.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7047" for this suite.
Aug 19 00:09:36.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:09:36.453: INFO: namespace kubectl-7047 deletion completed in 6.155554801s

• [SLOW TEST:26.006 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
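The Update Demo spec above creates a replication controller from a fixture that is not echoed into the log. A sketch of its likely shape, reconstructed from what the log does show (two `update-demo-nautilus-*` pods, the `name=update-demo` label, the nautilus image); details beyond those are assumptions:

```yaml
# Illustrative reconstruction of the update-demo fixture; the selector
# and the template's name label must match for the RC to manage pods.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80   # assumed; not visible in the log
```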
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:09:36.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 19 00:09:36.554: INFO: Waiting up to 5m0s for pod "pod-6342e0e4-9f96-4be1-bd3e-7f47a8c43718" in namespace "emptydir-9283" to be "success or failure"
Aug 19 00:09:36.629: INFO: Pod "pod-6342e0e4-9f96-4be1-bd3e-7f47a8c43718": Phase="Pending", Reason="", readiness=false. Elapsed: 74.938711ms
Aug 19 00:09:38.922: INFO: Pod "pod-6342e0e4-9f96-4be1-bd3e-7f47a8c43718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.368404382s
Aug 19 00:09:41.095: INFO: Pod "pod-6342e0e4-9f96-4be1-bd3e-7f47a8c43718": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.54139948s
STEP: Saw pod success
Aug 19 00:09:41.096: INFO: Pod "pod-6342e0e4-9f96-4be1-bd3e-7f47a8c43718" satisfied condition "success or failure"
Aug 19 00:09:41.228: INFO: Trying to get logs from node iruya-worker2 pod pod-6342e0e4-9f96-4be1-bd3e-7f47a8c43718 container test-container: 
STEP: delete the pod
Aug 19 00:09:41.314: INFO: Waiting for pod pod-6342e0e4-9f96-4be1-bd3e-7f47a8c43718 to disappear
Aug 19 00:09:41.382: INFO: Pod pod-6342e0e4-9f96-4be1-bd3e-7f47a8c43718 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:09:41.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9283" for this suite.
Aug 19 00:09:47.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:09:47.526: INFO: namespace emptydir-9283 deletion completed in 6.136059476s

• [SLOW TEST:11.072 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
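The EmptyDir spec above tests writing a 0644 file as a non-root user on the node's default medium. A minimal sketch of the kind of pod involved (the UID and commands are illustrative; the real test uses a dedicated mount-test image):

```yaml
# Illustrative pod: runs as a non-root UID and writes into an emptyDir
# backed by the node's default medium, then lists the resulting mode.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo        # hypothetical name
spec:
  securityContext:
    runAsUser: 1001          # assumed non-root UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo content > /test-volume/file && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
```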
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:09:47.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Aug 19 00:09:47.573: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Aug 19 00:09:47.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6458'
Aug 19 00:09:49.279: INFO: stderr: ""
Aug 19 00:09:49.279: INFO: stdout: "service/redis-slave created\n"
Aug 19 00:09:49.281: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Aug 19 00:09:49.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6458'
Aug 19 00:09:50.963: INFO: stderr: ""
Aug 19 00:09:50.963: INFO: stdout: "service/redis-master created\n"
Aug 19 00:09:50.965: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 19 00:09:50.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6458'
Aug 19 00:09:52.652: INFO: stderr: ""
Aug 19 00:09:52.652: INFO: stdout: "service/frontend created\n"
Aug 19 00:09:52.659: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Aug 19 00:09:52.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6458'
Aug 19 00:09:54.339: INFO: stderr: ""
Aug 19 00:09:54.339: INFO: stdout: "deployment.apps/frontend created\n"
Aug 19 00:09:54.341: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 19 00:09:54.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6458'
Aug 19 00:09:56.370: INFO: stderr: ""
Aug 19 00:09:56.370: INFO: stdout: "deployment.apps/redis-master created\n"
Aug 19 00:09:56.372: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Aug 19 00:09:56.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6458'
Aug 19 00:09:58.459: INFO: stderr: ""
Aug 19 00:09:58.459: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Aug 19 00:09:58.460: INFO: Waiting for all frontend pods to be Running.
Aug 19 00:10:08.513: INFO: Waiting for frontend to serve content.
Aug 19 00:10:08.533: INFO: Trying to add a new entry to the guestbook.
Aug 19 00:10:08.551: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 19 00:10:08.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6458'
Aug 19 00:10:09.941: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 00:10:09.941: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 19 00:10:09.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6458'
Aug 19 00:10:11.256: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 00:10:11.257: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 19 00:10:11.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6458'
Aug 19 00:10:12.607: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 00:10:12.607: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 19 00:10:12.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6458'
Aug 19 00:10:13.873: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 00:10:13.873: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 19 00:10:13.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6458'
Aug 19 00:10:15.569: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 00:10:15.569: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 19 00:10:15.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6458'
Aug 19 00:10:17.891: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 00:10:17.892: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:10:17.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6458" for this suite.
Aug 19 00:11:04.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:11:04.749: INFO: namespace kubectl-6458 deletion completed in 46.504105287s

• [SLOW TEST:77.223 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:11:04.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 19 00:11:16.982: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 19 00:11:16.987: INFO: Pod pod-with-prestop-http-hook still exists
Aug 19 00:11:18.988: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 19 00:11:18.999: INFO: Pod pod-with-prestop-http-hook still exists
Aug 19 00:11:20.988: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 19 00:11:21.133: INFO: Pod pod-with-prestop-http-hook still exists
Aug 19 00:11:22.988: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 19 00:11:22.994: INFO: Pod pod-with-prestop-http-hook still exists
Aug 19 00:11:24.988: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 19 00:11:24.999: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:11:25.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8328" for this suite.
Aug 19 00:11:49.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:11:49.935: INFO: namespace container-lifecycle-hook-8328 deletion completed in 24.918164575s

• [SLOW TEST:45.185 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:11:49.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 19 00:11:50.512: INFO: Waiting up to 5m0s for pod "pod-d2539834-859f-48e2-8dba-b030348857e9" in namespace "emptydir-4888" to be "success or failure"
Aug 19 00:11:50.961: INFO: Pod "pod-d2539834-859f-48e2-8dba-b030348857e9": Phase="Pending", Reason="", readiness=false. Elapsed: 449.430927ms
Aug 19 00:11:53.096: INFO: Pod "pod-d2539834-859f-48e2-8dba-b030348857e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.58437683s
Aug 19 00:11:55.332: INFO: Pod "pod-d2539834-859f-48e2-8dba-b030348857e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.820242921s
Aug 19 00:11:57.357: INFO: Pod "pod-d2539834-859f-48e2-8dba-b030348857e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.845736112s
Aug 19 00:11:59.363: INFO: Pod "pod-d2539834-859f-48e2-8dba-b030348857e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.851056683s
STEP: Saw pod success
Aug 19 00:11:59.363: INFO: Pod "pod-d2539834-859f-48e2-8dba-b030348857e9" satisfied condition "success or failure"
Aug 19 00:11:59.366: INFO: Trying to get logs from node iruya-worker2 pod pod-d2539834-859f-48e2-8dba-b030348857e9 container test-container: 
STEP: delete the pod
Aug 19 00:11:59.442: INFO: Waiting for pod pod-d2539834-859f-48e2-8dba-b030348857e9 to disappear
Aug 19 00:11:59.498: INFO: Pod pod-d2539834-859f-48e2-8dba-b030348857e9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:11:59.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4888" for this suite.
Aug 19 00:12:05.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:12:05.913: INFO: namespace emptydir-4888 deletion completed in 6.407802475s

• [SLOW TEST:15.975 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:12:05.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:12:17.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6569" for this suite.
Aug 19 00:12:57.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:12:57.193: INFO: namespace kubelet-test-6569 deletion completed in 40.179778775s

• [SLOW TEST:51.279 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:12:57.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4517
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Aug 19 00:12:57.829: INFO: Found 0 stateful pods, waiting for 3
Aug 19 00:13:08.019: INFO: Found 2 stateful pods, waiting for 3
Aug 19 00:13:17.840: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 00:13:17.840: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 00:13:17.840: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 19 00:13:17.914: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 19 00:13:28.179: INFO: Updating stateful set ss2
Aug 19 00:13:28.380: INFO: Waiting for Pod statefulset-4517/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Aug 19 00:13:41.283: INFO: Found 2 stateful pods, waiting for 3
Aug 19 00:13:51.291: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 00:13:51.291: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 00:13:51.291: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 19 00:14:01.293: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 00:14:01.293: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 00:14:01.294: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 19 00:14:01.327: INFO: Updating stateful set ss2
Aug 19 00:14:01.372: INFO: Waiting for Pod statefulset-4517/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 19 00:14:11.409: INFO: Updating stateful set ss2
Aug 19 00:14:11.778: INFO: Waiting for StatefulSet statefulset-4517/ss2 to complete update
Aug 19 00:14:11.779: INFO: Waiting for Pod statefulset-4517/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 19 00:14:21.790: INFO: Waiting for StatefulSet statefulset-4517/ss2 to complete update
Aug 19 00:14:21.790: INFO: Waiting for Pod statefulset-4517/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 19 00:14:31.928: INFO: Deleting all statefulset in ns statefulset-4517
Aug 19 00:14:32.106: INFO: Scaling statefulset ss2 to 0
Aug 19 00:14:52.586: INFO: Waiting for statefulset status.replicas updated to 0
Aug 19 00:14:52.590: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:14:52.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4517" for this suite.
Aug 19 00:15:00.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:15:00.793: INFO: namespace statefulset-4517 deletion completed in 8.158012118s

• [SLOW TEST:123.598 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:15:00.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Aug 19 00:15:04.926: INFO: Pod pod-hostip-1c0adcf2-32ab-48c4-a786-0de6ec1650be has hostIP: 172.18.0.9
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:15:04.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1195" for this suite.
Aug 19 00:15:33.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:15:33.341: INFO: namespace pods-1195 deletion completed in 28.405678608s

• [SLOW TEST:32.547 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:15:33.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 19 00:15:33.561: INFO: Waiting up to 5m0s for pod "downward-api-740e0fd5-22ac-47f2-a5f7-2bd0e7516eda" in namespace "downward-api-3888" to be "success or failure"
Aug 19 00:15:33.616: INFO: Pod "downward-api-740e0fd5-22ac-47f2-a5f7-2bd0e7516eda": Phase="Pending", Reason="", readiness=false. Elapsed: 54.782862ms
Aug 19 00:15:35.623: INFO: Pod "downward-api-740e0fd5-22ac-47f2-a5f7-2bd0e7516eda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062086671s
Aug 19 00:15:37.631: INFO: Pod "downward-api-740e0fd5-22ac-47f2-a5f7-2bd0e7516eda": Phase="Running", Reason="", readiness=true. Elapsed: 4.069759787s
Aug 19 00:15:39.637: INFO: Pod "downward-api-740e0fd5-22ac-47f2-a5f7-2bd0e7516eda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075862321s
STEP: Saw pod success
Aug 19 00:15:39.637: INFO: Pod "downward-api-740e0fd5-22ac-47f2-a5f7-2bd0e7516eda" satisfied condition "success or failure"
Aug 19 00:15:39.641: INFO: Trying to get logs from node iruya-worker2 pod downward-api-740e0fd5-22ac-47f2-a5f7-2bd0e7516eda container dapi-container: 
STEP: delete the pod
Aug 19 00:15:39.698: INFO: Waiting for pod downward-api-740e0fd5-22ac-47f2-a5f7-2bd0e7516eda to disappear
Aug 19 00:15:39.798: INFO: Pod downward-api-740e0fd5-22ac-47f2-a5f7-2bd0e7516eda no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:15:39.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3888" for this suite.
Aug 19 00:15:45.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:15:46.020: INFO: namespace downward-api-3888 deletion completed in 6.213357146s

• [SLOW TEST:12.678 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:15:46.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:16:20.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-76" for this suite.
Aug 19 00:16:26.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:16:26.462: INFO: namespace container-runtime-76 deletion completed in 6.17756712s

• [SLOW TEST:40.441 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:16:26.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-762a22e3-3d07-4fe5-8506-0728b54d2494
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-762a22e3-3d07-4fe5-8506-0728b54d2494
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:17:59.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6296" for this suite.
Aug 19 00:18:21.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:18:21.460: INFO: namespace projected-6296 deletion completed in 22.165450558s

• [SLOW TEST:114.995 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:18:21.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 00:18:21.827: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 19 00:18:26.835: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 19 00:18:26.837: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 19 00:18:32.913: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-5130,SelfLink:/apis/apps/v1/namespaces/deployment-5130/deployments/test-cleanup-deployment,UID:770eec75-74e1-4b12-8f9d-0e206a9d71fc,ResourceVersion:931384,Generation:1,CreationTimestamp:2020-08-19 00:18:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-19 00:18:26 +0000 UTC 2020-08-19 00:18:26 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-19 00:18:32 +0000 UTC 2020-08-19 00:18:26 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 19 00:18:32.924: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-5130,SelfLink:/apis/apps/v1/namespaces/deployment-5130/replicasets/test-cleanup-deployment-55bbcbc84c,UID:3b0a33a8-e4b4-43a3-8f58-c0ace6b1fb2b,ResourceVersion:931372,Generation:1,CreationTimestamp:2020-08-19 00:18:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 770eec75-74e1-4b12-8f9d-0e206a9d71fc 0x4002d44ac7 0x4002d44ac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 19 00:18:32.939: INFO: Pod "test-cleanup-deployment-55bbcbc84c-5nb6d" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-5nb6d,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-5130,SelfLink:/api/v1/namespaces/deployment-5130/pods/test-cleanup-deployment-55bbcbc84c-5nb6d,UID:8bd97e3b-f013-4a18-bc5d-838da856ffc8,ResourceVersion:931371,Generation:0,CreationTimestamp:2020-08-19 00:18:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 3b0a33a8-e4b4-43a3-8f58-c0ace6b1fb2b 0x4002d45097 0x4002d45098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mgwnz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgwnz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-mgwnz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002d45110} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002d45130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:18:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:18:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:18:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:18:26 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.3,StartTime:2020-08-19 00:18:26 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-19 00:18:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://1fbf9063806222071a0fd72b429d69aea024d5b0569bbfabf37e24dc31f0d7f2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:18:32.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5130" for this suite.
Aug 19 00:18:41.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:18:41.192: INFO: namespace deployment-5130 deletion completed in 8.245343548s

• [SLOW TEST:19.731 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:18:41.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 00:18:41.266: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f291ac5e-3783-47e6-a1af-0e864d124976" in namespace "projected-9467" to be "success or failure"
Aug 19 00:18:41.319: INFO: Pod "downwardapi-volume-f291ac5e-3783-47e6-a1af-0e864d124976": Phase="Pending", Reason="", readiness=false. Elapsed: 53.276953ms
Aug 19 00:18:43.326: INFO: Pod "downwardapi-volume-f291ac5e-3783-47e6-a1af-0e864d124976": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059516752s
Aug 19 00:18:45.333: INFO: Pod "downwardapi-volume-f291ac5e-3783-47e6-a1af-0e864d124976": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067037979s
STEP: Saw pod success
Aug 19 00:18:45.333: INFO: Pod "downwardapi-volume-f291ac5e-3783-47e6-a1af-0e864d124976" satisfied condition "success or failure"
Aug 19 00:18:45.339: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f291ac5e-3783-47e6-a1af-0e864d124976 container client-container: 
STEP: delete the pod
Aug 19 00:18:45.380: INFO: Waiting for pod downwardapi-volume-f291ac5e-3783-47e6-a1af-0e864d124976 to disappear
Aug 19 00:18:45.384: INFO: Pod downwardapi-volume-f291ac5e-3783-47e6-a1af-0e864d124976 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:18:45.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9467" for this suite.
Aug 19 00:18:51.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:18:51.563: INFO: namespace projected-9467 deletion completed in 6.170847882s

• [SLOW TEST:10.370 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:18:51.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Aug 19 00:18:51.702: INFO: Waiting up to 5m0s for pod "client-containers-0a188f50-028f-4ee6-9369-55de4e30ce9b" in namespace "containers-5141" to be "success or failure"
Aug 19 00:18:51.713: INFO: Pod "client-containers-0a188f50-028f-4ee6-9369-55de4e30ce9b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.565569ms
Aug 19 00:18:53.720: INFO: Pod "client-containers-0a188f50-028f-4ee6-9369-55de4e30ce9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017702828s
Aug 19 00:18:55.728: INFO: Pod "client-containers-0a188f50-028f-4ee6-9369-55de4e30ce9b": Phase="Running", Reason="", readiness=true. Elapsed: 4.025123124s
Aug 19 00:18:57.733: INFO: Pod "client-containers-0a188f50-028f-4ee6-9369-55de4e30ce9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030949319s
STEP: Saw pod success
Aug 19 00:18:57.734: INFO: Pod "client-containers-0a188f50-028f-4ee6-9369-55de4e30ce9b" satisfied condition "success or failure"
Aug 19 00:18:57.737: INFO: Trying to get logs from node iruya-worker2 pod client-containers-0a188f50-028f-4ee6-9369-55de4e30ce9b container test-container: 
STEP: delete the pod
Aug 19 00:18:57.772: INFO: Waiting for pod client-containers-0a188f50-028f-4ee6-9369-55de4e30ce9b to disappear
Aug 19 00:18:57.785: INFO: Pod client-containers-0a188f50-028f-4ee6-9369-55de4e30ce9b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:18:57.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5141" for this suite.
Aug 19 00:19:03.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:19:04.022: INFO: namespace containers-5141 deletion completed in 6.228586221s

• [SLOW TEST:12.455 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:19:04.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3129.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3129.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 19 00:19:10.271: INFO: DNS probes using dns-3129/dns-test-03e963d9-b031-431d-9b93-335ab58e35c8 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:19:10.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3129" for this suite.
Aug 19 00:19:16.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:19:16.610: INFO: namespace dns-3129 deletion completed in 6.220883686s

• [SLOW TEST:12.588 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:19:16.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-9ee51338-bd34-41df-9dc2-56612f285a24
STEP: Creating a pod to test consume configMaps
Aug 19 00:19:16.717: INFO: Waiting up to 5m0s for pod "pod-configmaps-20454089-274d-430a-9297-bf42d7008c58" in namespace "configmap-3471" to be "success or failure"
Aug 19 00:19:16.747: INFO: Pod "pod-configmaps-20454089-274d-430a-9297-bf42d7008c58": Phase="Pending", Reason="", readiness=false. Elapsed: 29.124758ms
Aug 19 00:19:18.758: INFO: Pod "pod-configmaps-20454089-274d-430a-9297-bf42d7008c58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039868761s
Aug 19 00:19:20.765: INFO: Pod "pod-configmaps-20454089-274d-430a-9297-bf42d7008c58": Phase="Running", Reason="", readiness=true. Elapsed: 4.047044178s
Aug 19 00:19:22.772: INFO: Pod "pod-configmaps-20454089-274d-430a-9297-bf42d7008c58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054233684s
STEP: Saw pod success
Aug 19 00:19:22.772: INFO: Pod "pod-configmaps-20454089-274d-430a-9297-bf42d7008c58" satisfied condition "success or failure"
Aug 19 00:19:22.778: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-20454089-274d-430a-9297-bf42d7008c58 container configmap-volume-test: 
STEP: delete the pod
Aug 19 00:19:22.801: INFO: Waiting for pod pod-configmaps-20454089-274d-430a-9297-bf42d7008c58 to disappear
Aug 19 00:19:22.860: INFO: Pod pod-configmaps-20454089-274d-430a-9297-bf42d7008c58 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:19:22.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3471" for this suite.
Aug 19 00:19:28.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:19:29.126: INFO: namespace configmap-3471 deletion completed in 6.201257144s

• [SLOW TEST:12.510 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:19:29.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1709
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-1709
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1709
Aug 19 00:19:29.278: INFO: Found 0 stateful pods, waiting for 1
Aug 19 00:19:39.286: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 19 00:19:39.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 19 00:19:43.415: INFO: stderr: "I0819 00:19:43.271994     988 log.go:172] (0x40007064d0) (0x40005c4820) Create stream\nI0819 00:19:43.273898     988 log.go:172] (0x40007064d0) (0x40005c4820) Stream added, broadcasting: 1\nI0819 00:19:43.286186     988 log.go:172] (0x40007064d0) Reply frame received for 1\nI0819 00:19:43.287419     988 log.go:172] (0x40007064d0) (0x40007280a0) Create stream\nI0819 00:19:43.287543     988 log.go:172] (0x40007064d0) (0x40007280a0) Stream added, broadcasting: 3\nI0819 00:19:43.289450     988 log.go:172] (0x40007064d0) Reply frame received for 3\nI0819 00:19:43.289770     988 log.go:172] (0x40007064d0) (0x40005c48c0) Create stream\nI0819 00:19:43.289848     988 log.go:172] (0x40007064d0) (0x40005c48c0) Stream added, broadcasting: 5\nI0819 00:19:43.291355     988 log.go:172] (0x40007064d0) Reply frame received for 5\nI0819 00:19:43.355409     988 log.go:172] (0x40007064d0) Data frame received for 5\nI0819 00:19:43.355752     988 log.go:172] (0x40005c48c0) (5) Data frame handling\nI0819 00:19:43.356564     988 log.go:172] (0x40005c48c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0819 00:19:43.389859     988 log.go:172] (0x40007064d0) Data frame received for 3\nI0819 00:19:43.390008     988 log.go:172] (0x40007280a0) (3) Data frame handling\nI0819 00:19:43.390186     988 log.go:172] (0x40007064d0) Data frame received for 5\nI0819 00:19:43.390368     988 log.go:172] (0x40005c48c0) (5) Data frame handling\nI0819 00:19:43.390473     988 log.go:172] (0x40007280a0) (3) Data frame sent\nI0819 00:19:43.390601     988 log.go:172] (0x40007064d0) Data frame received for 3\nI0819 00:19:43.390704     988 log.go:172] (0x40007280a0) (3) Data frame handling\nI0819 00:19:43.392258     988 log.go:172] (0x40007064d0) Data frame received for 1\nI0819 00:19:43.392464     988 log.go:172] (0x40005c4820) (1) Data frame handling\nI0819 00:19:43.392650     988 log.go:172] (0x40005c4820) (1) Data frame sent\nI0819 00:19:43.394086     
988 log.go:172] (0x40007064d0) (0x40005c4820) Stream removed, broadcasting: 1\nI0819 00:19:43.397981     988 log.go:172] (0x40007064d0) Go away received\nI0819 00:19:43.399926     988 log.go:172] (0x40007064d0) (0x40005c4820) Stream removed, broadcasting: 1\nI0819 00:19:43.401003     988 log.go:172] (0x40007064d0) (0x40007280a0) Stream removed, broadcasting: 3\nI0819 00:19:43.401643     988 log.go:172] (0x40007064d0) (0x40005c48c0) Stream removed, broadcasting: 5\n"
Aug 19 00:19:43.416: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 19 00:19:43.416: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 19 00:19:43.422: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 19 00:19:53.513: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 19 00:19:53.514: INFO: Waiting for statefulset status.replicas updated to 0
Aug 19 00:19:53.806: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999996322s
Aug 19 00:19:54.829: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.722386304s
Aug 19 00:19:55.835: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.699840686s
Aug 19 00:19:56.890: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.69328153s
Aug 19 00:19:57.912: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.638701911s
Aug 19 00:19:58.921: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.616656592s
Aug 19 00:19:59.930: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.60729143s
Aug 19 00:20:00.937: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.598339566s
Aug 19 00:20:01.946: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.591199636s
Aug 19 00:20:02.974: INFO: Verifying statefulset ss doesn't scale past 1 for another 582.973885ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1709
Aug 19 00:20:03.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:20:05.476: INFO: stderr: "I0819 00:20:05.340800    1025 log.go:172] (0x40006b8840) (0x4000642960) Create stream\nI0819 00:20:05.344981    1025 log.go:172] (0x40006b8840) (0x4000642960) Stream added, broadcasting: 1\nI0819 00:20:05.359256    1025 log.go:172] (0x40006b8840) Reply frame received for 1\nI0819 00:20:05.359930    1025 log.go:172] (0x40006b8840) (0x40006421e0) Create stream\nI0819 00:20:05.360012    1025 log.go:172] (0x40006b8840) (0x40006421e0) Stream added, broadcasting: 3\nI0819 00:20:05.361308    1025 log.go:172] (0x40006b8840) Reply frame received for 3\nI0819 00:20:05.361568    1025 log.go:172] (0x40006b8840) (0x4000642280) Create stream\nI0819 00:20:05.361636    1025 log.go:172] (0x40006b8840) (0x4000642280) Stream added, broadcasting: 5\nI0819 00:20:05.362758    1025 log.go:172] (0x40006b8840) Reply frame received for 5\nI0819 00:20:05.455001    1025 log.go:172] (0x40006b8840) Data frame received for 5\nI0819 00:20:05.455298    1025 log.go:172] (0x40006b8840) Data frame received for 3\nI0819 00:20:05.455436    1025 log.go:172] (0x40006421e0) (3) Data frame handling\nI0819 00:20:05.455656    1025 log.go:172] (0x40006b8840) Data frame received for 1\nI0819 00:20:05.455803    1025 log.go:172] (0x4000642960) (1) Data frame handling\nI0819 00:20:05.456012    1025 log.go:172] (0x4000642280) (5) Data frame handling\nI0819 00:20:05.457598    1025 log.go:172] (0x4000642960) (1) Data frame sent\nI0819 00:20:05.457926    1025 log.go:172] (0x4000642280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0819 00:20:05.460129    1025 log.go:172] (0x40006421e0) (3) Data frame sent\nI0819 00:20:05.460243    1025 log.go:172] (0x40006b8840) Data frame received for 3\nI0819 00:20:05.460336    1025 log.go:172] (0x40006421e0) (3) Data frame handling\nI0819 00:20:05.460543    1025 log.go:172] (0x40006b8840) Data frame received for 5\nI0819 00:20:05.460623    1025 log.go:172] (0x4000642280) (5) Data frame handling\nI0819 00:20:05.461735    
1025 log.go:172] (0x40006b8840) (0x4000642960) Stream removed, broadcasting: 1\nI0819 00:20:05.462021    1025 log.go:172] (0x40006b8840) Go away received\nI0819 00:20:05.465313    1025 log.go:172] (0x40006b8840) (0x4000642960) Stream removed, broadcasting: 1\nI0819 00:20:05.465615    1025 log.go:172] (0x40006b8840) (0x40006421e0) Stream removed, broadcasting: 3\nI0819 00:20:05.465805    1025 log.go:172] (0x40006b8840) (0x4000642280) Stream removed, broadcasting: 5\n"
Aug 19 00:20:05.477: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 19 00:20:05.477: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

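The readiness toggling above works because each pod's readiness probe fetches index.html from nginx; moving the file out of the web root makes the pod unready, and moving it back restores readiness. A minimal sketch of how that exec invocation is put together, using the namespace and pod from this run (`FULL_CMD` is an illustrative variable, not part of the framework):

```shell
#!/bin/sh
# Sketch: build the kubectl exec used to flip a StatefulSet pod's readiness.
# The '|| true' keeps the exec exit code 0 even if the file was already moved.
NS=statefulset-1709
POD=ss-0
RESTORE='mv -v /tmp/index.html /usr/share/nginx/html/ || true'
FULL_CMD="kubectl --kubeconfig=/root/.kube/config exec --namespace=$NS $POD -- /bin/sh -x -c '$RESTORE'"
echo "$FULL_CMD"
```

Running the printed command against a live cluster reproduces the `'/tmp/index.html' -> '/usr/share/nginx/html/index.html'` stdout seen in the log.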
Aug 19 00:20:05.484: INFO: Found 1 stateful pods, waiting for 3
Aug 19 00:20:15.544: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 00:20:15.544: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 00:20:15.544: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 19 00:20:15.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 19 00:20:17.032: INFO: stderr: "I0819 00:20:16.912836    1048 log.go:172] (0x4000866630) (0x40008c4820) Create stream\nI0819 00:20:16.918040    1048 log.go:172] (0x4000866630) (0x40008c4820) Stream added, broadcasting: 1\nI0819 00:20:16.935446    1048 log.go:172] (0x4000866630) Reply frame received for 1\nI0819 00:20:16.935946    1048 log.go:172] (0x4000866630) (0x40008d2000) Create stream\nI0819 00:20:16.936008    1048 log.go:172] (0x4000866630) (0x40008d2000) Stream added, broadcasting: 3\nI0819 00:20:16.937345    1048 log.go:172] (0x4000866630) Reply frame received for 3\nI0819 00:20:16.937613    1048 log.go:172] (0x4000866630) (0x40008c4000) Create stream\nI0819 00:20:16.937696    1048 log.go:172] (0x4000866630) (0x40008c4000) Stream added, broadcasting: 5\nI0819 00:20:16.938757    1048 log.go:172] (0x4000866630) Reply frame received for 5\nI0819 00:20:17.010181    1048 log.go:172] (0x4000866630) Data frame received for 5\nI0819 00:20:17.010467    1048 log.go:172] (0x4000866630) Data frame received for 1\nI0819 00:20:17.010615    1048 log.go:172] (0x40008c4000) (5) Data frame handling\nI0819 00:20:17.010771    1048 log.go:172] (0x4000866630) Data frame received for 3\nI0819 00:20:17.010863    1048 log.go:172] (0x40008d2000) (3) Data frame handling\nI0819 00:20:17.010982    1048 log.go:172] (0x40008c4820) (1) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0819 00:20:17.012510    1048 log.go:172] (0x40008c4820) (1) Data frame sent\nI0819 00:20:17.012622    1048 log.go:172] (0x40008d2000) (3) Data frame sent\nI0819 00:20:17.013156    1048 log.go:172] (0x40008c4000) (5) Data frame sent\nI0819 00:20:17.013335    1048 log.go:172] (0x4000866630) Data frame received for 5\nI0819 00:20:17.013454    1048 log.go:172] (0x40008c4000) (5) Data frame handling\nI0819 00:20:17.013551    1048 log.go:172] (0x4000866630) Data frame received for 3\nI0819 00:20:17.013660    1048 log.go:172] (0x40008d2000) (3) Data frame handling\nI0819 00:20:17.014300    
1048 log.go:172] (0x4000866630) (0x40008c4820) Stream removed, broadcasting: 1\nI0819 00:20:17.018068    1048 log.go:172] (0x4000866630) Go away received\nI0819 00:20:17.020697    1048 log.go:172] (0x4000866630) (0x40008c4820) Stream removed, broadcasting: 1\nI0819 00:20:17.021055    1048 log.go:172] (0x4000866630) (0x40008d2000) Stream removed, broadcasting: 3\nI0819 00:20:17.021262    1048 log.go:172] (0x4000866630) (0x40008c4000) Stream removed, broadcasting: 5\n"
Aug 19 00:20:17.033: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 19 00:20:17.033: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 19 00:20:17.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 19 00:20:18.542: INFO: stderr: "I0819 00:20:18.404171    1071 log.go:172] (0x4000142fd0) (0x40005346e0) Create stream\nI0819 00:20:18.406427    1071 log.go:172] (0x4000142fd0) (0x40005346e0) Stream added, broadcasting: 1\nI0819 00:20:18.414706    1071 log.go:172] (0x4000142fd0) Reply frame received for 1\nI0819 00:20:18.415213    1071 log.go:172] (0x4000142fd0) (0x40008cc000) Create stream\nI0819 00:20:18.415284    1071 log.go:172] (0x4000142fd0) (0x40008cc000) Stream added, broadcasting: 3\nI0819 00:20:18.416834    1071 log.go:172] (0x4000142fd0) Reply frame received for 3\nI0819 00:20:18.417058    1071 log.go:172] (0x4000142fd0) (0x40007fc000) Create stream\nI0819 00:20:18.417122    1071 log.go:172] (0x4000142fd0) (0x40007fc000) Stream added, broadcasting: 5\nI0819 00:20:18.418425    1071 log.go:172] (0x4000142fd0) Reply frame received for 5\nI0819 00:20:18.482262    1071 log.go:172] (0x4000142fd0) Data frame received for 5\nI0819 00:20:18.482562    1071 log.go:172] (0x40007fc000) (5) Data frame handling\nI0819 00:20:18.483203    1071 log.go:172] (0x40007fc000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0819 00:20:18.513279    1071 log.go:172] (0x4000142fd0) Data frame received for 3\nI0819 00:20:18.513365    1071 log.go:172] (0x40008cc000) (3) Data frame handling\nI0819 00:20:18.513465    1071 log.go:172] (0x40008cc000) (3) Data frame sent\nI0819 00:20:18.513547    1071 log.go:172] (0x4000142fd0) Data frame received for 3\nI0819 00:20:18.513616    1071 log.go:172] (0x40008cc000) (3) Data frame handling\nI0819 00:20:18.514080    1071 log.go:172] (0x4000142fd0) Data frame received for 5\nI0819 00:20:18.514369    1071 log.go:172] (0x40007fc000) (5) Data frame handling\nI0819 00:20:18.517889    1071 log.go:172] (0x4000142fd0) Data frame received for 1\nI0819 00:20:18.517957    1071 log.go:172] (0x40005346e0) (1) Data frame handling\nI0819 00:20:18.518035    1071 log.go:172] (0x40005346e0) (1) Data frame sent\nI0819 00:20:18.519145    
1071 log.go:172] (0x4000142fd0) (0x40005346e0) Stream removed, broadcasting: 1\nI0819 00:20:18.522614    1071 log.go:172] (0x4000142fd0) Go away received\nI0819 00:20:18.532111    1071 log.go:172] (0x4000142fd0) (0x40005346e0) Stream removed, broadcasting: 1\nI0819 00:20:18.532408    1071 log.go:172] (0x4000142fd0) (0x40008cc000) Stream removed, broadcasting: 3\nI0819 00:20:18.533444    1071 log.go:172] (0x4000142fd0) (0x40007fc000) Stream removed, broadcasting: 5\n"
Aug 19 00:20:18.543: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 19 00:20:18.543: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 19 00:20:18.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 19 00:20:20.037: INFO: stderr: "I0819 00:20:19.880478    1095 log.go:172] (0x40005c4420) (0x4000912960) Create stream\nI0819 00:20:19.886840    1095 log.go:172] (0x40005c4420) (0x4000912960) Stream added, broadcasting: 1\nI0819 00:20:19.903733    1095 log.go:172] (0x40005c4420) Reply frame received for 1\nI0819 00:20:19.904277    1095 log.go:172] (0x40005c4420) (0x40001e5c20) Create stream\nI0819 00:20:19.904347    1095 log.go:172] (0x40005c4420) (0x40001e5c20) Stream added, broadcasting: 3\nI0819 00:20:19.905827    1095 log.go:172] (0x40005c4420) Reply frame received for 3\nI0819 00:20:19.906152    1095 log.go:172] (0x40005c4420) (0x400083a0a0) Create stream\nI0819 00:20:19.906248    1095 log.go:172] (0x40005c4420) (0x400083a0a0) Stream added, broadcasting: 5\nI0819 00:20:19.907666    1095 log.go:172] (0x40005c4420) Reply frame received for 5\nI0819 00:20:19.988910    1095 log.go:172] (0x40005c4420) Data frame received for 5\nI0819 00:20:19.989189    1095 log.go:172] (0x400083a0a0) (5) Data frame handling\nI0819 00:20:19.989807    1095 log.go:172] (0x400083a0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0819 00:20:20.016012    1095 log.go:172] (0x40005c4420) Data frame received for 3\nI0819 00:20:20.016183    1095 log.go:172] (0x40005c4420) Data frame received for 5\nI0819 00:20:20.016334    1095 log.go:172] (0x400083a0a0) (5) Data frame handling\nI0819 00:20:20.016459    1095 log.go:172] (0x40001e5c20) (3) Data frame handling\nI0819 00:20:20.016622    1095 log.go:172] (0x40001e5c20) (3) Data frame sent\nI0819 00:20:20.016933    1095 log.go:172] (0x40005c4420) Data frame received for 3\nI0819 00:20:20.017062    1095 log.go:172] (0x40001e5c20) (3) Data frame handling\nI0819 00:20:20.018369    1095 log.go:172] (0x40005c4420) Data frame received for 1\nI0819 00:20:20.018502    1095 log.go:172] (0x4000912960) (1) Data frame handling\nI0819 00:20:20.018578    1095 log.go:172] (0x4000912960) (1) Data frame sent\nI0819 00:20:20.019259    
1095 log.go:172] (0x40005c4420) (0x4000912960) Stream removed, broadcasting: 1\nI0819 00:20:20.022300    1095 log.go:172] (0x40005c4420) Go away received\nI0819 00:20:20.025275    1095 log.go:172] (0x40005c4420) (0x4000912960) Stream removed, broadcasting: 1\nI0819 00:20:20.025856    1095 log.go:172] (0x40005c4420) (0x40001e5c20) Stream removed, broadcasting: 3\nI0819 00:20:20.026108    1095 log.go:172] (0x40005c4420) (0x400083a0a0) Stream removed, broadcasting: 5\n"
Aug 19 00:20:20.037: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 19 00:20:20.037: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 19 00:20:20.038: INFO: Waiting for statefulset status.replicas updated to 0
Aug 19 00:20:20.043: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 19 00:20:30.058: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 19 00:20:30.058: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 19 00:20:30.058: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 19 00:20:30.078: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999992854s
Aug 19 00:20:31.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992008423s
Aug 19 00:20:32.119: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.983095861s
Aug 19 00:20:33.129: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.950566872s
Aug 19 00:20:34.139: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.940558528s
Aug 19 00:20:35.150: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.930423665s
Aug 19 00:20:36.158: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.919869763s
Aug 19 00:20:37.166: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.91166322s
Aug 19 00:20:38.176: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.903719023s
Aug 19 00:20:39.184: INFO: Verifying statefulset ss doesn't scale past 3 for another 893.558695ms
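The countdown above is the framework repeatedly checking that `ss` never grows past 3 replicas while all of its pods are unready. A compressed sketch of that poll-until-deadline check, with `get_replicas` mocked as a stand-in for the real `kubectl get statefulset ss` status query (the actual test polls for a full 10s in roughly 1s steps):

```shell
#!/bin/sh
# Mocked replica query; the real test reads .status.replicas from the API server.
get_replicas() { echo 3; }
CAP=3
# Three quick iterations stand in for the test's 10-second verification window.
for i in 1 2 3; do
  n=$(get_replicas)
  if [ "$n" -gt "$CAP" ]; then
    echo "statefulset scaled past $CAP (saw $n)"
    exit 1
  fi
  sleep 0.1
done
echo "held at $CAP replicas"
```

The design point mirrored here: a scale bound can only be verified negatively, by sampling for the whole window and failing on the first violation, which is why the log prints one "doesn't scale past 3" line per second until the deadline.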
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-1709
Aug 19 00:20:40.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:20:41.757: INFO: stderr: "I0819 00:20:41.652207    1118 log.go:172] (0x40006ce420) (0x40003fe640) Create stream\nI0819 00:20:41.658281    1118 log.go:172] (0x40006ce420) (0x40003fe640) Stream added, broadcasting: 1\nI0819 00:20:41.670516    1118 log.go:172] (0x40006ce420) Reply frame received for 1\nI0819 00:20:41.671067    1118 log.go:172] (0x40006ce420) (0x40003fe6e0) Create stream\nI0819 00:20:41.671150    1118 log.go:172] (0x40006ce420) (0x40003fe6e0) Stream added, broadcasting: 3\nI0819 00:20:41.672869    1118 log.go:172] (0x40006ce420) Reply frame received for 3\nI0819 00:20:41.673227    1118 log.go:172] (0x40006ce420) (0x4000762000) Create stream\nI0819 00:20:41.673307    1118 log.go:172] (0x40006ce420) (0x4000762000) Stream added, broadcasting: 5\nI0819 00:20:41.674663    1118 log.go:172] (0x40006ce420) Reply frame received for 5\nI0819 00:20:41.736037    1118 log.go:172] (0x40006ce420) Data frame received for 3\nI0819 00:20:41.736422    1118 log.go:172] (0x40006ce420) Data frame received for 1\nI0819 00:20:41.736858    1118 log.go:172] (0x40006ce420) Data frame received for 5\nI0819 00:20:41.737007    1118 log.go:172] (0x4000762000) (5) Data frame handling\nI0819 00:20:41.737136    1118 log.go:172] (0x40003fe6e0) (3) Data frame handling\nI0819 00:20:41.737369    1118 log.go:172] (0x40003fe640) (1) Data frame handling\nI0819 00:20:41.738626    1118 log.go:172] (0x4000762000) (5) Data frame sent\nI0819 00:20:41.738740    1118 log.go:172] (0x40003fe6e0) (3) Data frame sent\nI0819 00:20:41.739057    1118 log.go:172] (0x40003fe640) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0819 00:20:41.740048    1118 log.go:172] (0x40006ce420) Data frame received for 5\nI0819 00:20:41.740124    1118 log.go:172] (0x4000762000) (5) Data frame handling\nI0819 00:20:41.740478    1118 log.go:172] (0x40006ce420) Data frame received for 3\nI0819 00:20:41.740620    1118 log.go:172] (0x40003fe6e0) (3) Data frame handling\nI0819 00:20:41.742812    
1118 log.go:172] (0x40006ce420) (0x40003fe640) Stream removed, broadcasting: 1\nI0819 00:20:41.743462    1118 log.go:172] (0x40006ce420) Go away received\nI0819 00:20:41.746573    1118 log.go:172] (0x40006ce420) (0x40003fe640) Stream removed, broadcasting: 1\nI0819 00:20:41.746882    1118 log.go:172] (0x40006ce420) (0x40003fe6e0) Stream removed, broadcasting: 3\nI0819 00:20:41.747088    1118 log.go:172] (0x40006ce420) (0x4000762000) Stream removed, broadcasting: 5\n"
Aug 19 00:20:41.758: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 19 00:20:41.758: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 19 00:20:41.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:20:43.235: INFO: stderr: "I0819 00:20:43.115157    1142 log.go:172] (0x40007a6790) (0x40008c2820) Create stream\nI0819 00:20:43.119184    1142 log.go:172] (0x40007a6790) (0x40008c2820) Stream added, broadcasting: 1\nI0819 00:20:43.136708    1142 log.go:172] (0x40007a6790) Reply frame received for 1\nI0819 00:20:43.137441    1142 log.go:172] (0x40007a6790) (0x400080e000) Create stream\nI0819 00:20:43.137512    1142 log.go:172] (0x40007a6790) (0x400080e000) Stream added, broadcasting: 3\nI0819 00:20:43.139312    1142 log.go:172] (0x40007a6790) Reply frame received for 3\nI0819 00:20:43.139950    1142 log.go:172] (0x40007a6790) (0x40008c2000) Create stream\nI0819 00:20:43.140097    1142 log.go:172] (0x40007a6790) (0x40008c2000) Stream added, broadcasting: 5\nI0819 00:20:43.142009    1142 log.go:172] (0x40007a6790) Reply frame received for 5\nI0819 00:20:43.214079    1142 log.go:172] (0x40007a6790) Data frame received for 3\nI0819 00:20:43.214461    1142 log.go:172] (0x40007a6790) Data frame received for 5\nI0819 00:20:43.214745    1142 log.go:172] (0x40007a6790) Data frame received for 1\nI0819 00:20:43.214899    1142 log.go:172] (0x40008c2820) (1) Data frame handling\nI0819 00:20:43.215043    1142 log.go:172] (0x400080e000) (3) Data frame handling\nI0819 00:20:43.215278    1142 log.go:172] (0x40008c2000) (5) Data frame handling\nI0819 00:20:43.216525    1142 log.go:172] (0x40008c2820) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0819 00:20:43.217372    1142 log.go:172] (0x400080e000) (3) Data frame sent\nI0819 00:20:43.217534    1142 log.go:172] (0x40007a6790) Data frame received for 3\nI0819 00:20:43.217887    1142 log.go:172] (0x40008c2000) (5) Data frame sent\nI0819 00:20:43.218004    1142 log.go:172] (0x40007a6790) Data frame received for 5\nI0819 00:20:43.218123    1142 log.go:172] (0x40007a6790) (0x40008c2820) Stream removed, broadcasting: 1\nI0819 00:20:43.218742    1142 log.go:172] (0x40008c2000) (5) Data frame 
handling\nI0819 00:20:43.218963    1142 log.go:172] (0x400080e000) (3) Data frame handling\nI0819 00:20:43.222850    1142 log.go:172] (0x40007a6790) Go away received\nI0819 00:20:43.225401    1142 log.go:172] (0x40007a6790) (0x40008c2820) Stream removed, broadcasting: 1\nI0819 00:20:43.226179    1142 log.go:172] (0x40007a6790) (0x400080e000) Stream removed, broadcasting: 3\nI0819 00:20:43.226436    1142 log.go:172] (0x40007a6790) (0x40008c2000) Stream removed, broadcasting: 5\n"
Aug 19 00:20:43.237: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 19 00:20:43.237: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 19 00:20:43.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:20:44.646: INFO: rc: 1
Aug 19 00:20:44.648: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0x4002973470 exit status 1   true [0x4000737fe8 0x4000010120 0x40000102e8] [0x4000737fe8 0x4000010120 0x40000102e8] [0x4000010110 0x40000101b0] [0xad5158 0xad5158] 0x40026a90e0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
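The failure above is the first of the framework's RunHostCmd retries: once ss-2's nginx container is torn down during scale-down, the exec fails (first "container not found", then "pods \"ss-2\" not found") and is retried every 10s. The retry shape, sketched with a mocked command that fails twice and then succeeds (`run_cmd` is a stand-in for the kubectl exec; the real framework sleeps 10s between attempts):

```shell
#!/bin/sh
attempt=0
# Mock of RunHostCmd: returns nonzero on the first two attempts, zero on the third.
run_cmd() {
  attempt=$((attempt + 1))
  [ "$attempt" -ge 3 ]
}
until run_cmd; do
  echo "attempt $attempt failed; retrying"
  sleep 0.1  # the e2e framework waits 10s between attempts
done
echo "succeeded on attempt $attempt"
```

In this run the retries never succeed, because the pod is intentionally being deleted; the loop simply keeps probing until the test's separate wait-for-zero-replicas condition is satisfied.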
Aug 19 00:20:54.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:20:55.891: INFO: rc: 1
Aug 19 00:20:55.892: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x4002b0d7a0 exit status 1   true [0x40001af9f8 0x40001afad8 0x40001afc30] [0x40001af9f8 0x40001afad8 0x40001afc30] [0x40001afac0 0x40001afbb0] [0xad5158 0xad5158] 0x4002c1cea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 00:21:05.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:21:07.152: INFO: rc: 1
Aug 19 00:21:07.153: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x4002973530 exit status 1   true [0x4000010340 0x4000010448 0x4000010570] [0x4000010340 0x4000010448 0x4000010570] [0x40000103e8 0x4000010508] [0xad5158 0xad5158] 0x40026a94a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 00:21:17.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:21:18.414: INFO: rc: 1
Aug 19 00:21:18.414: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x4002b0d860 exit status 1   true [0x40001afca8 0x40001afec8 0x40001aff58] [0x40001afca8 0x40001afec8 0x40001aff58] [0x40001afdf8 0x40001aff28] [0xad5158 0xad5158] 0x4002c1d200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 00:21:28.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:21:29.682: INFO: rc: 1
Aug 19 00:21:29.683: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x4002973620 exit status 1   true [0x40000105e8 0x40000107f8 0x40000108f8] [0x40000105e8 0x40000107f8 0x40000108f8] [0x40000106d0 0x40000108b8] [0xad5158 0xad5158] 0x40026a9800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 00:21:39.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:21:40.942: INFO: rc: 1
Aug 19 00:21:40.943: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x40018860f0 exit status 1   true [0x4001be0a60 0x4001be0b10 0x4001be0ba8] [0x4001be0a60 0x4001be0b10 0x4001be0ba8] [0x4001be0ae8 0x4001be0b40] [0xad5158 0xad5158] 0x4002cc8d80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 00:21:50.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:21:52.203: INFO: rc: 1
Aug 19 00:21:52.203: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x40029736e0 exit status 1   true [0x4000010918 0x4000010f20 0x40000110a8] [0x4000010918 0x4000010f20 0x40000110a8] [0x4000010e88 0x4000011060] [0xad5158 0xad5158] 0x40013283c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 00:22:02.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:22:03.475: INFO: rc: 1
Aug 19 00:22:03.475: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x4002b34090 exit status 1   true [0x4000736ea0 0x4000737350 0x4000737640] [0x4000736ea0 0x4000737350 0x4000737640] [0x4000737228 0x4000737460] [0xad5158 0xad5158] 0x40026a9200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 00:22:13.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:22:14.741: INFO: rc: 1
Aug 19 00:22:14.742: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x40019ae0f0 exit status 1   true [0x40000100a8 0x4000010150 0x4000010340] [0x40000100a8 0x4000010150 0x4000010340] [0x4000010120 0x40000102e8] [0xad5158 0xad5158] 0x400283a9c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 00:22:24.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:22:25.979: INFO: rc: 1
Aug 19 00:22:25.980: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x40019ae1b0 exit status 1   true [0x40000103b8 0x4000010488 0x40000105e8] [0x40000103b8 0x4000010488 0x40000105e8] [0x4000010448 0x4000010570] [0xad5158 0xad5158] 0x400283b320 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 00:22:35.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:22:37.220: INFO: rc: 1
Aug 19 00:22:37.221: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x400249c0c0 exit status 1   true [0x4001be0000 0x4001be0078 0x4001be0118] [0x4001be0000 0x4001be0078 0x4001be0118] [0x4001be0018 0x4001be00d0] [0xad5158 0xad5158] 0x40024bb440 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 00:22:47.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:22:48.461: INFO: rc: 1
Aug 19 00:22:48.461: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x40019ae270 exit status 1   true [0x4000010668 0x4000010848 0x4000010918] [0x4000010668 0x4000010848 0x4000010918] [0x40000107f8 0x40000108f8] [0xad5158 0xad5158] 0x400283b920 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 00:22:58.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:22:59.729: INFO: rc: 1
Aug 19 00:22:59.729: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x4002b341b0 exit status 1   true [0x4000737698 0x40007378b8 0x4000737ac8] [0x4000737698 0x40007378b8 0x4000737ac8] [0x4000737820 0x4000737a98] [0xad5158 0xad5158] 0x40026a9560 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 00:23:09.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:23:10.974: INFO: rc: 1
Aug 19 00:23:10.975: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x400249c1e0 exit status 1   true [0x4001be0140 0x4001be0178 0x4001be0210] [0x4001be0140 0x4001be0178 0x4001be0210] [0x4001be0168 0x4001be0200] [0xad5158 0xad5158] 0x40026841e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 00:23:20.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:23:22.257: INFO: rc: 1
Aug 19 00:23:22.258: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x400249c2a0 exit status 1   true [0x4001be0258 0x4001be02e8 0x4001be0338] [0x4001be0258 0x4001be02e8 0x4001be0338] [0x4001be02c8 0x4001be0300] [0xad5158 0xad5158] 0x4002684540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 00:23:32.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:23:33.536: INFO: rc: 1
Aug 19 00:23:33.536: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x40024834a0 exit status 1   true [0x400243e068 0x400243e130 0x400243e208] [0x400243e068 0x400243e130 0x400243e208] [0x400243e0c8 0x400243e1d0] [0xad5158 0xad5158] 0x40013288a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 19 00:25:47.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 00:25:49.184: INFO: rc: 1
Aug 19 00:25:49.185: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Aug 19 00:25:49.185: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 19 00:25:49.228: INFO: Deleting all statefulset in ns statefulset-1709
Aug 19 00:25:49.233: INFO: Scaling statefulset ss to 0
Aug 19 00:25:49.265: INFO: Waiting for statefulset status.replicas updated to 0
Aug 19 00:25:49.269: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:25:49.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1709" for this suite.
Aug 19 00:25:55.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:25:55.445: INFO: namespace statefulset-1709 deletion completed in 6.15169961s

• [SLOW TEST:386.317 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
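The block above is the framework's RunHostCmd retry loop: the exec target ss-2 has already been removed by the scale-down, so each attempt fails with `Error from server (NotFound)` and the framework waits 10s before retrying until its budget is exhausted. A minimal sketch of that retry pattern (the helper name and signature are hypothetical, not the framework's actual code):

```python
import time


def retry_until(fn, timeout_s=120.0, interval_s=10.0,
                clock=time.monotonic, sleep=time.sleep):
    """Retry fn until it returns without raising or the deadline passes.

    Mirrors the 'Waiting 10s to retry failed RunHostCmd' behaviour in
    the log above; this is an illustrative sketch, not framework code.
    """
    deadline = clock() + timeout_s
    last_exc = None
    while clock() < deadline:
        try:
            return fn()
        except Exception as exc:  # e.g. 'pods "ss-2" not found'
            last_exc = exc
            sleep(interval_s)
    raise TimeoutError(f"still failing after {timeout_s}s: {last_exc}")
```

In the log the target pod never comes back, so every attempt fails and the test simply moves on once the retry budget runs out (the `|| true` in the shell command makes that benign).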
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:25:55.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 19 00:25:56.812: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4022,SelfLink:/api/v1/namespaces/watch-4022/configmaps/e2e-watch-test-configmap-a,UID:deb43330-989e-49ce-a35e-5c43cb920251,ResourceVersion:932583,Generation:0,CreationTimestamp:2020-08-19 00:25:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 19 00:25:56.815: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4022,SelfLink:/api/v1/namespaces/watch-4022/configmaps/e2e-watch-test-configmap-a,UID:deb43330-989e-49ce-a35e-5c43cb920251,ResourceVersion:932583,Generation:0,CreationTimestamp:2020-08-19 00:25:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 19 00:26:06.854: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4022,SelfLink:/api/v1/namespaces/watch-4022/configmaps/e2e-watch-test-configmap-a,UID:deb43330-989e-49ce-a35e-5c43cb920251,ResourceVersion:932605,Generation:0,CreationTimestamp:2020-08-19 00:25:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 19 00:26:06.856: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4022,SelfLink:/api/v1/namespaces/watch-4022/configmaps/e2e-watch-test-configmap-a,UID:deb43330-989e-49ce-a35e-5c43cb920251,ResourceVersion:932605,Generation:0,CreationTimestamp:2020-08-19 00:25:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 19 00:26:16.945: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4022,SelfLink:/api/v1/namespaces/watch-4022/configmaps/e2e-watch-test-configmap-a,UID:deb43330-989e-49ce-a35e-5c43cb920251,ResourceVersion:932625,Generation:0,CreationTimestamp:2020-08-19 00:25:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 19 00:26:16.946: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4022,SelfLink:/api/v1/namespaces/watch-4022/configmaps/e2e-watch-test-configmap-a,UID:deb43330-989e-49ce-a35e-5c43cb920251,ResourceVersion:932625,Generation:0,CreationTimestamp:2020-08-19 00:25:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 19 00:26:27.013: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4022,SelfLink:/api/v1/namespaces/watch-4022/configmaps/e2e-watch-test-configmap-a,UID:deb43330-989e-49ce-a35e-5c43cb920251,ResourceVersion:932646,Generation:0,CreationTimestamp:2020-08-19 00:25:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 19 00:26:27.013: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4022,SelfLink:/api/v1/namespaces/watch-4022/configmaps/e2e-watch-test-configmap-a,UID:deb43330-989e-49ce-a35e-5c43cb920251,ResourceVersion:932646,Generation:0,CreationTimestamp:2020-08-19 00:25:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 19 00:26:37.025: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4022,SelfLink:/api/v1/namespaces/watch-4022/configmaps/e2e-watch-test-configmap-b,UID:a2b8c6f1-e62f-4ca8-96e3-e36b8ea01e65,ResourceVersion:932666,Generation:0,CreationTimestamp:2020-08-19 00:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 19 00:26:37.026: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4022,SelfLink:/api/v1/namespaces/watch-4022/configmaps/e2e-watch-test-configmap-b,UID:a2b8c6f1-e62f-4ca8-96e3-e36b8ea01e65,ResourceVersion:932666,Generation:0,CreationTimestamp:2020-08-19 00:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 19 00:26:47.037: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4022,SelfLink:/api/v1/namespaces/watch-4022/configmaps/e2e-watch-test-configmap-b,UID:a2b8c6f1-e62f-4ca8-96e3-e36b8ea01e65,ResourceVersion:932685,Generation:0,CreationTimestamp:2020-08-19 00:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 19 00:26:47.038: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4022,SelfLink:/api/v1/namespaces/watch-4022/configmaps/e2e-watch-test-configmap-b,UID:a2b8c6f1-e62f-4ca8-96e3-e36b8ea01e65,ResourceVersion:932685,Generation:0,CreationTimestamp:2020-08-19 00:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:26:57.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4022" for this suite.
Aug 19 00:27:05.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:27:05.217: INFO: namespace watch-4022 deletion completed in 8.163246059s

• [SLOW TEST:69.771 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
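Each `Got : ADDED/MODIFIED/DELETED` line above appears twice because two watchers match configmap A: the label-A watcher and the label-A-or-B watcher. A sketch of that selector routing (the selector representation here is a deliberate simplification, not the real Kubernetes label-selector API):

```python
def matching_watchers(obj_labels, watchers):
    """Return names of watchers whose selector matches obj_labels.

    Each selector is (label_key, set_of_allowed_values) -- a simplified,
    hypothetical stand-in for a Kubernetes label selector.
    """
    matched = []
    for name, (key, allowed) in watchers.items():
        if obj_labels.get(key) in allowed:
            matched.append(name)
    return matched
```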
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:27:05.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 00:27:05.336: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 19 00:27:05.367: INFO: Number of nodes with available pods: 0
Aug 19 00:27:05.368: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug 19 00:27:05.459: INFO: Number of nodes with available pods: 0
Aug 19 00:27:05.459: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:27:06.468: INFO: Number of nodes with available pods: 0
Aug 19 00:27:06.468: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:27:07.466: INFO: Number of nodes with available pods: 0
Aug 19 00:27:07.466: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:27:08.471: INFO: Number of nodes with available pods: 0
Aug 19 00:27:08.471: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:27:09.514: INFO: Number of nodes with available pods: 0
Aug 19 00:27:09.514: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:27:10.466: INFO: Number of nodes with available pods: 0
Aug 19 00:27:10.467: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:27:11.467: INFO: Number of nodes with available pods: 1
Aug 19 00:27:11.467: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 19 00:27:11.513: INFO: Number of nodes with available pods: 1
Aug 19 00:27:11.513: INFO: Number of running nodes: 0, number of available pods: 1
Aug 19 00:27:12.520: INFO: Number of nodes with available pods: 0
Aug 19 00:27:12.520: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 19 00:27:12.541: INFO: Number of nodes with available pods: 0
Aug 19 00:27:12.541: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:27:13.547: INFO: Number of nodes with available pods: 0
Aug 19 00:27:13.547: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:27:14.845: INFO: Number of nodes with available pods: 0
Aug 19 00:27:14.846: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:27:15.547: INFO: Number of nodes with available pods: 0
Aug 19 00:27:15.547: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:27:16.549: INFO: Number of nodes with available pods: 0
Aug 19 00:27:16.549: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:27:17.548: INFO: Number of nodes with available pods: 0
Aug 19 00:27:17.548: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:27:18.632: INFO: Number of nodes with available pods: 0
Aug 19 00:27:18.632: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:27:19.548: INFO: Number of nodes with available pods: 0
Aug 19 00:27:19.548: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:27:20.597: INFO: Number of nodes with available pods: 1
Aug 19 00:27:20.597: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2127, will wait for the garbage collector to delete the pods
Aug 19 00:27:20.677: INFO: Deleting DaemonSet.extensions daemon-set took: 9.501574ms
Aug 19 00:27:20.978: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.829741ms
Aug 19 00:27:33.711: INFO: Number of nodes with available pods: 0
Aug 19 00:27:33.711: INFO: Number of running nodes: 0, number of available pods: 0
Aug 19 00:27:33.754: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2127/daemonsets","resourceVersion":"932832"},"items":null}

Aug 19 00:27:33.760: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2127/pods","resourceVersion":"932832"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:27:33.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2127" for this suite.
Aug 19 00:27:39.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:27:40.083: INFO: namespace daemonsets-2127 deletion completed in 6.181318408s

• [SLOW TEST:34.863 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
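The blue/green relabelling above drives DaemonSet placement through the pod's node selector: the daemon pod runs only on nodes whose labels satisfy every selector entry, which is why flipping the label schedules and then unschedules the pod. A simplified sketch of that predicate (not the scheduler's actual code):

```python
def should_run_daemon_pod(node_labels, node_selector):
    """True iff every key/value in the DaemonSet's node selector is
    present on the node -- the behaviour exercised by the blue/green
    relabelling in the log above. Illustrative only."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())
```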
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:27:40.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 00:27:40.321: INFO: Waiting up to 5m0s for pod "downwardapi-volume-078954ee-bb1e-4293-a096-427c87cd84d7" in namespace "downward-api-3814" to be "success or failure"
Aug 19 00:27:40.461: INFO: Pod "downwardapi-volume-078954ee-bb1e-4293-a096-427c87cd84d7": Phase="Pending", Reason="", readiness=false. Elapsed: 139.035373ms
Aug 19 00:27:42.469: INFO: Pod "downwardapi-volume-078954ee-bb1e-4293-a096-427c87cd84d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147055041s
Aug 19 00:27:44.496: INFO: Pod "downwardapi-volume-078954ee-bb1e-4293-a096-427c87cd84d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174568146s
Aug 19 00:27:46.502: INFO: Pod "downwardapi-volume-078954ee-bb1e-4293-a096-427c87cd84d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.180749718s
STEP: Saw pod success
Aug 19 00:27:46.502: INFO: Pod "downwardapi-volume-078954ee-bb1e-4293-a096-427c87cd84d7" satisfied condition "success or failure"
Aug 19 00:27:46.506: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-078954ee-bb1e-4293-a096-427c87cd84d7 container client-container: 
STEP: delete the pod
Aug 19 00:27:46.531: INFO: Waiting for pod downwardapi-volume-078954ee-bb1e-4293-a096-427c87cd84d7 to disappear
Aug 19 00:27:46.589: INFO: Pod downwardapi-volume-078954ee-bb1e-4293-a096-427c87cd84d7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:27:46.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3814" for this suite.
Aug 19 00:27:53.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:27:54.108: INFO: namespace downward-api-3814 deletion completed in 7.414905571s

• [SLOW TEST:14.024 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
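The `Phase="Pending"` lines above show the framework polling the pod's status until it reaches the "success or failure" condition. A minimal sketch of that poll loop (`get_phase` is a hypothetical stand-in for reading `pod.status.phase` from the API server):

```python
def wait_for_terminal_phase(get_phase, max_polls=150):
    """Poll a pod's phase until it is terminal, as in the
    'Waiting up to 5m0s for pod ... to be "success or failure"'
    loop above. Illustrative sketch, not framework code."""
    for _ in range(max_polls):
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached a terminal phase")
```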
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:27:54.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 19 00:27:54.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1867'
Aug 19 00:27:55.908: INFO: stderr: ""
Aug 19 00:27:55.908: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug 19 00:27:55.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1867'
Aug 19 00:28:01.044: INFO: stderr: ""
Aug 19 00:28:01.044: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:28:01.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1867" for this suite.
Aug 19 00:28:07.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:28:07.283: INFO: namespace kubectl-1867 deletion completed in 6.230199056s

• [SLOW TEST:13.173 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
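For context on the test above: the deprecated `run-pod/v1` generator invoked by `kubectl run e2e-test-nginx-pod --restart=Never` creates a bare Pod with no managing controller. A hedged sketch of the roughly equivalent manifest (field values beyond the name, image, and restart policy shown in the log are assumptions):

```yaml
# Approximate equivalent of `kubectl run --restart=Never --generator=run-pod/v1`:
# a standalone Pod, never restarted or recreated by any controller.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod   # label the generator applies; assumed here
spec:
  restartPolicy: Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```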
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:28:07.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0819 00:28:21.189179       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 19 00:28:21.190: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:28:21.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9206" for this suite.
Aug 19 00:28:37.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:28:37.917: INFO: namespace gc-9206 deletion completed in 16.420937177s

• [SLOW TEST:30.633 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
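The garbage-collector test above gives half of the pods two owner references, then foreground-deletes one owner. Because the other owner remains valid, the GC must not delete those pods. An illustrative sketch of such a dually-owned pod (the pod name and UIDs are hypothetical placeholders; the RC names come from the log):

```yaml
# A pod owned by two ReplicationControllers. Foreground deletion of
# simpletest-rc-to-be-deleted leaves this pod alive, because
# simpletest-rc-to-stay is still a valid owner.
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod              # hypothetical name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 11111111-1111-1111-1111-111111111111   # placeholder UID
    blockOwnerDeletion: true
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 22222222-2222-2222-2222-222222222222   # placeholder UID
```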
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:28:37.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 19 00:28:37.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8138'
Aug 19 00:28:39.364: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 19 00:28:39.364: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Aug 19 00:28:39.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8138'
Aug 19 00:28:40.903: INFO: stderr: ""
Aug 19 00:28:40.903: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:28:40.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8138" for this suite.
Aug 19 00:28:47.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:28:47.353: INFO: namespace kubectl-8138 deletion completed in 6.435084821s

• [SLOW TEST:9.435 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
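In the test above, `kubectl run` without a `--restart` flag falls through to the deprecated default generator (`deployment/apps.v1`), which is why the stderr warning appears and a Deployment is created. A rough sketch of what that generator emits (only the name and image are confirmed by the log; the other fields are assumptions about the generator's defaults):

```yaml
# Approximation of the Deployment created by the deprecated
# deployment/apps.v1 generator for `kubectl run` with no --restart flag.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```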
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:28:47.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 19 00:28:48.135: INFO: Pod name wrapped-volume-race-620c0f06-dd4d-4b95-b9f9-6b3e3c2fde55: Found 0 pods out of 5
Aug 19 00:28:53.158: INFO: Pod name wrapped-volume-race-620c0f06-dd4d-4b95-b9f9-6b3e3c2fde55: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-620c0f06-dd4d-4b95-b9f9-6b3e3c2fde55 in namespace emptydir-wrapper-9731, will wait for the garbage collector to delete the pods
Aug 19 00:29:09.591: INFO: Deleting ReplicationController wrapped-volume-race-620c0f06-dd4d-4b95-b9f9-6b3e3c2fde55 took: 9.271162ms
Aug 19 00:29:10.992: INFO: Terminating ReplicationController wrapped-volume-race-620c0f06-dd4d-4b95-b9f9-6b3e3c2fde55 pods took: 1.400616746s
STEP: Creating RC which spawns configmap-volume pods
Aug 19 00:29:54.436: INFO: Pod name wrapped-volume-race-a52827fa-9763-4901-bd1d-431bb86366fb: Found 0 pods out of 5
Aug 19 00:29:59.457: INFO: Pod name wrapped-volume-race-a52827fa-9763-4901-bd1d-431bb86366fb: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a52827fa-9763-4901-bd1d-431bb86366fb in namespace emptydir-wrapper-9731, will wait for the garbage collector to delete the pods
Aug 19 00:30:19.615: INFO: Deleting ReplicationController wrapped-volume-race-a52827fa-9763-4901-bd1d-431bb86366fb took: 10.346529ms
Aug 19 00:30:19.916: INFO: Terminating ReplicationController wrapped-volume-race-a52827fa-9763-4901-bd1d-431bb86366fb pods took: 300.881572ms
STEP: Creating RC which spawns configmap-volume pods
Aug 19 00:31:03.486: INFO: Pod name wrapped-volume-race-dc46afa9-849f-4591-bc96-e1944be96755: Found 0 pods out of 5
Aug 19 00:31:08.502: INFO: Pod name wrapped-volume-race-dc46afa9-849f-4591-bc96-e1944be96755: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-dc46afa9-849f-4591-bc96-e1944be96755 in namespace emptydir-wrapper-9731, will wait for the garbage collector to delete the pods
Aug 19 00:31:26.652: INFO: Deleting ReplicationController wrapped-volume-race-dc46afa9-849f-4591-bc96-e1944be96755 took: 57.372096ms
Aug 19 00:31:26.952: INFO: Terminating ReplicationController wrapped-volume-race-dc46afa9-849f-4591-bc96-e1944be96755 pods took: 300.651738ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:32:14.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9731" for this suite.
Aug 19 00:32:28.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:32:28.817: INFO: namespace emptydir-wrapper-9731 deletion completed in 14.149190065s

• [SLOW TEST:221.464 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
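The emptyDir-wrapper race test above repeatedly spawns an RC whose pods each mount many configmap volumes at once; the race historically appeared when multiple such volumes were set up concurrently. A reduced sketch of the pod shape involved (volume count cut down from the test's 50; names here are hypothetical):

```yaml
# Reduced sketch of a configmap-volume pod like those the RC spawns.
# The real test mounts ~50 configmap volumes per pod to provoke the race.
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-race-example   # hypothetical name
spec:
  containers:
  - name: test-container
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: racey-configmap-0
      mountPath: /etc/racey-configmap-0
    - name: racey-configmap-1
      mountPath: /etc/racey-configmap-1
  volumes:
  - name: racey-configmap-0
    configMap:
      name: racey-configmap-0
  - name: racey-configmap-1
    configMap:
      name: racey-configmap-1
```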
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:32:28.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug 19 00:32:35.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-6450a730-7368-4b29-b362-b62749b9b286 -c busybox-main-container --namespace=emptydir-75 -- cat /usr/share/volumeshare/shareddata.txt'
Aug 19 00:32:45.240: INFO: stderr: "I0819 00:32:45.118638    1876 log.go:172] (0x4000a80420) (0x4000784960) Create stream\nI0819 00:32:45.122279    1876 log.go:172] (0x4000a80420) (0x4000784960) Stream added, broadcasting: 1\nI0819 00:32:45.137201    1876 log.go:172] (0x4000a80420) Reply frame received for 1\nI0819 00:32:45.138310    1876 log.go:172] (0x4000a80420) (0x400067e280) Create stream\nI0819 00:32:45.138442    1876 log.go:172] (0x4000a80420) (0x400067e280) Stream added, broadcasting: 3\nI0819 00:32:45.140413    1876 log.go:172] (0x4000a80420) Reply frame received for 3\nI0819 00:32:45.140960    1876 log.go:172] (0x4000a80420) (0x4000900000) Create stream\nI0819 00:32:45.141074    1876 log.go:172] (0x4000a80420) (0x4000900000) Stream added, broadcasting: 5\nI0819 00:32:45.143464    1876 log.go:172] (0x4000a80420) Reply frame received for 5\nI0819 00:32:45.217523    1876 log.go:172] (0x4000a80420) Data frame received for 5\nI0819 00:32:45.217768    1876 log.go:172] (0x4000a80420) Data frame received for 1\nI0819 00:32:45.218140    1876 log.go:172] (0x4000a80420) Data frame received for 3\nI0819 00:32:45.218304    1876 log.go:172] (0x400067e280) (3) Data frame handling\nI0819 00:32:45.218426    1876 log.go:172] (0x4000900000) (5) Data frame handling\nI0819 00:32:45.218657    1876 log.go:172] (0x4000784960) (1) Data frame handling\nI0819 00:32:45.222666    1876 log.go:172] (0x4000784960) (1) Data frame sent\nI0819 00:32:45.224193    1876 log.go:172] (0x400067e280) (3) Data frame sent\nI0819 00:32:45.224340    1876 log.go:172] (0x4000a80420) Data frame received for 3\nI0819 00:32:45.224486    1876 log.go:172] (0x4000a80420) (0x4000784960) Stream removed, broadcasting: 1\nI0819 00:32:45.224938    1876 log.go:172] (0x400067e280) (3) Data frame handling\nI0819 00:32:45.225414    1876 log.go:172] (0x4000a80420) Go away received\nI0819 00:32:45.228610    1876 log.go:172] (0x4000a80420) (0x4000784960) Stream removed, broadcasting: 1\nI0819 00:32:45.229007    1876 
log.go:172] (0x4000a80420) (0x400067e280) Stream removed, broadcasting: 3\nI0819 00:32:45.229275    1876 log.go:172] (0x4000a80420) (0x4000900000) Stream removed, broadcasting: 5\n"
Aug 19 00:32:45.241: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:32:45.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-75" for this suite.
Aug 19 00:32:51.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:32:51.479: INFO: namespace emptydir-75 deletion completed in 6.22746655s

• [SLOW TEST:22.661 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:32:51.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:32:55.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3768" for this suite.
Aug 19 00:33:01.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:33:01.790: INFO: namespace kubelet-test-3768 deletion completed in 6.171788973s

• [SLOW TEST:10.306 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:33:01.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 00:33:01.877: INFO: Creating deployment "test-recreate-deployment"
Aug 19 00:33:01.883: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Aug 19 00:33:01.919: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 19 00:33:03.981: INFO: Waiting deployment "test-recreate-deployment" to complete
Aug 19 00:33:03.990: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733393981, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733393981, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733393981, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733393981, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 00:33:05.997: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 19 00:33:06.012: INFO: Updating deployment test-recreate-deployment
Aug 19 00:33:06.012: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 19 00:33:06.710: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-8049,SelfLink:/apis/apps/v1/namespaces/deployment-8049/deployments/test-recreate-deployment,UID:93eac2d3-1a7a-45c0-8893-b2044c752b70,ResourceVersion:934669,Generation:2,CreationTimestamp:2020-08-19 00:33:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-08-19 00:33:06 +0000 UTC 2020-08-19 00:33:06 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-19 00:33:06 +0000 UTC 2020-08-19 00:33:01 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Aug 19 00:33:06.781: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-8049,SelfLink:/apis/apps/v1/namespaces/deployment-8049/replicasets/test-recreate-deployment-5c8c9cc69d,UID:905df2e1-4e02-43d6-937f-32e78c0094f2,ResourceVersion:934666,Generation:1,CreationTimestamp:2020-08-19 00:33:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 93eac2d3-1a7a-45c0-8893-b2044c752b70 0x4002fdc6e7 0x4002fdc6e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 19 00:33:06.781: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 19 00:33:06.782: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-8049,SelfLink:/apis/apps/v1/namespaces/deployment-8049/replicasets/test-recreate-deployment-6df85df6b9,UID:1255bdab-d788-48eb-9202-d2eb256c9d3b,ResourceVersion:934658,Generation:2,CreationTimestamp:2020-08-19 00:33:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 93eac2d3-1a7a-45c0-8893-b2044c752b70 0x4002fdc7b7 0x4002fdc7b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 19 00:33:06.791: INFO: Pod "test-recreate-deployment-5c8c9cc69d-ms2fk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-ms2fk,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-8049,SelfLink:/api/v1/namespaces/deployment-8049/pods/test-recreate-deployment-5c8c9cc69d-ms2fk,UID:0086f546-22c3-4096-84d4-e36edcbecdc0,ResourceVersion:934671,Generation:0,CreationTimestamp:2020-08-19 00:33:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 905df2e1-4e02-43d6-937f-32e78c0094f2 0x4002fdd0d7 0x4002fdd0d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-fk5x6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fk5x6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-fk5x6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002fdd150} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002fdd170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:33:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:33:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:33:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:33:06 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 00:33:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:33:06.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8049" for this suite.
Aug 19 00:33:12.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:33:12.991: INFO: namespace deployment-8049 deletion completed in 6.189626107s

• [SLOW TEST:11.198 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
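The RecreateDeployment spec above logs the new pod as "not available" while its phase is still Pending and its Ready condition is False. A minimal sketch (not the e2e framework's actual code) of how that availability verdict follows from the dumped PodStatus:

```python
# Hedged sketch: derive "available" from a PodStatus-shaped dict, mirroring
# the conditions dumped in the log. A pod counts as available only when it
# is Running and its Ready condition is True.

def pod_is_available(status):
    """status: a dict shaped like v1.PodStatus."""
    if status.get("phase") != "Running":
        return False
    for cond in status.get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False

# The Pending pod from the log, reduced to the relevant fields:
pending = {
    "phase": "Pending",
    "conditions": [
        {"type": "Initialized", "status": "True"},
        {"type": "Ready", "status": "False", "reason": "ContainersNotReady"},
        {"type": "ContainersReady", "status": "False"},
        {"type": "PodScheduled", "status": "True"},
    ],
}
print(pod_is_available(pending))  # False: phase is still Pending
```

This also explains the Recreate strategy visible in the ReplicaSet dump: the old ReplicaSet is scaled to `Replicas:*0` before the new pod has become available.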
SS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:33:12.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Aug 19 00:33:13.159: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3670" to be "success or failure"
Aug 19 00:33:13.164: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.803319ms
Aug 19 00:33:15.170: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011310563s
Aug 19 00:33:17.334: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174921376s
Aug 19 00:33:19.340: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.181384061s
STEP: Saw pod success
Aug 19 00:33:19.340: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 19 00:33:19.344: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 19 00:33:19.372: INFO: Waiting for pod pod-host-path-test to disappear
Aug 19 00:33:19.376: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:33:19.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-3670" for this suite.
Aug 19 00:33:25.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:33:25.585: INFO: namespace hostpath-3670 deletion completed in 6.202078542s

• [SLOW TEST:12.593 seconds]
[sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
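The HostPath spec waits for the test pod to reach "success or failure", polling its phase every couple of seconds (Pending, Pending, Succeeded above). A hedged sketch of that wait loop, with `get_phase` standing in for a real API lookup, not the framework's actual helper:

```python
import time

# Illustrative poll-until-terminal loop: a pod's phase is checked until it
# reaches Succeeded or Failed, or the timeout (5m0s in the log) expires.

def wait_for_terminal_phase(get_phase, timeout_s=300.0, poll_s=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    deadline = clock() + timeout_s
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(poll_s)
    raise TimeoutError("pod did not reach a terminal phase")

# Simulate the Pending -> Pending -> Succeeded progression from the log:
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases), sleep=lambda s: None)
print(result)  # Succeeded
```

Injecting `clock` and `sleep` keeps the sketch testable without real delays; the production framework uses the same poll-with-deadline shape.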
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:33:25.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 19 00:33:25.758: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:25.785: INFO: Number of nodes with available pods: 0
Aug 19 00:33:25.785: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:33:27.279: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:27.285: INFO: Number of nodes with available pods: 0
Aug 19 00:33:27.285: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:33:28.102: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:28.108: INFO: Number of nodes with available pods: 0
Aug 19 00:33:28.108: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:33:28.809: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:29.065: INFO: Number of nodes with available pods: 0
Aug 19 00:33:29.065: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:33:29.965: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:29.995: INFO: Number of nodes with available pods: 0
Aug 19 00:33:29.995: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:33:30.798: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:30.934: INFO: Number of nodes with available pods: 0
Aug 19 00:33:30.934: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:33:31.797: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:31.804: INFO: Number of nodes with available pods: 1
Aug 19 00:33:31.804: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:33:32.799: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:32.805: INFO: Number of nodes with available pods: 2
Aug 19 00:33:32.805: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 19 00:33:32.923: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:32.930: INFO: Number of nodes with available pods: 1
Aug 19 00:33:32.930: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:33:33.943: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:33.949: INFO: Number of nodes with available pods: 1
Aug 19 00:33:33.949: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:33:34.940: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:34.946: INFO: Number of nodes with available pods: 1
Aug 19 00:33:34.946: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:33:35.940: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:35.945: INFO: Number of nodes with available pods: 1
Aug 19 00:33:35.945: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:33:36.941: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:36.947: INFO: Number of nodes with available pods: 1
Aug 19 00:33:36.947: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:33:37.942: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:37.947: INFO: Number of nodes with available pods: 1
Aug 19 00:33:37.947: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:33:38.939: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:38.944: INFO: Number of nodes with available pods: 1
Aug 19 00:33:38.944: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:33:39.943: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:39.949: INFO: Number of nodes with available pods: 1
Aug 19 00:33:39.949: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:33:40.953: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:40.958: INFO: Number of nodes with available pods: 1
Aug 19 00:33:40.958: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:33:41.943: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:41.950: INFO: Number of nodes with available pods: 1
Aug 19 00:33:41.950: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:33:42.940: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:42.945: INFO: Number of nodes with available pods: 1
Aug 19 00:33:42.945: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:33:43.943: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:43.952: INFO: Number of nodes with available pods: 1
Aug 19 00:33:43.952: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:33:44.941: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:44.949: INFO: Number of nodes with available pods: 1
Aug 19 00:33:44.949: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:33:45.943: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:45.948: INFO: Number of nodes with available pods: 1
Aug 19 00:33:45.948: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:33:46.943: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:46.949: INFO: Number of nodes with available pods: 1
Aug 19 00:33:46.949: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:33:47.943: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:33:47.949: INFO: Number of nodes with available pods: 2
Aug 19 00:33:47.949: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2341, will wait for the garbage collector to delete the pods
Aug 19 00:33:48.016: INFO: Deleting DaemonSet.extensions daemon-set took: 7.575292ms
Aug 19 00:33:48.317: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.751773ms
Aug 19 00:33:54.123: INFO: Number of nodes with available pods: 0
Aug 19 00:33:54.123: INFO: Number of running nodes: 0, number of available pods: 0
Aug 19 00:33:54.127: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2341/daemonsets","resourceVersion":"934873"},"items":null}

Aug 19 00:33:54.131: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2341/pods","resourceVersion":"934873"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:33:54.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2341" for this suite.
Aug 19 00:34:02.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:34:02.478: INFO: namespace daemonsets-2341 deletion completed in 8.321320625s

• [SLOW TEST:36.891 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
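The DaemonSet spec repeatedly logs two things: tainted nodes being skipped (iruya-control-plane with the master NoSchedule taint) and the count of nodes with an available daemon pod climbing to 2. A hedged sketch of that counting check, not the framework's actual code:

```python
# Illustrative check: count schedulable nodes that have a ready daemon pod,
# skipping nodes whose NoSchedule taints the DaemonSet does not tolerate.

def nodes_with_available_pods(nodes, pods, tolerated_taint_keys=()):
    count = 0
    for node in nodes:
        if any(t["effect"] == "NoSchedule" and t["key"] not in tolerated_taint_keys
               for t in node.get("taints", [])):
            continue  # e.g. iruya-control-plane with node-role.kubernetes.io/master
        if any(p["nodeName"] == node["name"] and p["ready"] for p in pods):
            count += 1
    return count

nodes = [
    {"name": "iruya-control-plane",
     "taints": [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}]},
    {"name": "iruya-worker", "taints": []},
    {"name": "iruya-worker2", "taints": []},
]
pods = [
    {"nodeName": "iruya-worker", "ready": True},
    {"nodeName": "iruya-worker2", "ready": True},
]
print(nodes_with_available_pods(nodes, pods))  # 2, as in the final log line
```

The "Stop a daemon pod" phase then deletes one pod and re-runs this check until the controller revives the pod and the count returns to 2.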
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:34:02.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 00:34:02.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug 19 00:34:03.777: INFO: stderr: ""
Aug 19 00:34:03.777: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T05:17:59Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:08:45Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:34:03.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6970" for this suite.
Aug 19 00:34:09.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:34:09.958: INFO: namespace kubectl-6970 deletion completed in 6.169633251s

• [SLOW TEST:7.477 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
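The Kubectl version spec captures `kubectl version` stdout and verifies that both the client and the server `version.Info` are printed. A minimal sketch of that check, run against an abbreviated copy of the stdout logged above (struct fields truncated for brevity):

```python
# Hedged sketch of the assertion behind "should check is all data is
# printed": both Client Version and Server Version lines must appear.
stdout = (
    'Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.12"}\n'
    'Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.12"}\n'
)

def all_version_data_printed(out):
    return "Client Version" in out and "Server Version" in out

print(all_version_data_printed(stdout))  # True
```

Note the log's stdout also shows the client built for linux/arm64 talking to a linux/amd64 server; the check is platform-agnostic.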
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:34:09.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 19 00:34:10.045: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Aug 19 00:34:13.509: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 19 00:34:15.964: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733394053, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733394053, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733394053, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733394053, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 00:34:17.972: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733394053, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733394053, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733394053, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733394053, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 00:34:20.634: INFO: Waited 631.805088ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:34:21.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-7439" for this suite.
Aug 19 00:34:31.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:34:31.759: INFO: namespace aggregator-7439 deletion completed in 10.377127497s

• [SLOW TEST:21.800 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
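The Aggregator spec waits on the sample-apiserver Deployment's status: the dumps above show `Available=False` with reason `MinimumReplicasUnavailable` until the replica comes up. A hedged sketch of that readiness gate, not the framework's code:

```python
# Illustrative check: a Deployment is treated as ready once its Available
# condition reports True, matching the DeploymentStatus dumps in the log.

def deployment_available(status):
    for cond in status.get("conditions", []):
        if cond["type"] == "Available":
            return cond["status"] == "True"
    return False

# Reduced from the logged status while the ReplicaSet was still progressing:
progressing = {
    "replicas": 1, "readyReplicas": 0, "unavailableReplicas": 1,
    "conditions": [
        {"type": "Available", "status": "False",
         "reason": "MinimumReplicasUnavailable"},
        {"type": "Progressing", "status": "True", "reason": "ReplicaSetUpdated"},
    ],
}
print(deployment_available(progressing))  # False
```

Only after this gate passes does the test register the APIService and exercise requests through the aggregation layer.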
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:34:31.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-13518bce-cea3-4f48-9841-cabe7f562e41
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:34:32.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1289" for this suite.
Aug 19 00:34:38.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:34:38.703: INFO: namespace secrets-1289 deletion completed in 6.136628346s

• [SLOW TEST:6.943 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
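The Secrets spec expects creation to fail because the Secret's data map contains an empty key. A sketch of why: Kubernetes requires each data key to be a non-empty string of alphanumerics, `-`, `_` or `.` (the regex below paraphrases the upstream validation rule and is an assumption, not the exact source):

```python
import re

# Hedged illustration of Secret key validation: an empty key can never
# match the required key pattern, so the API server rejects the object.
KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def invalid_secret_keys(data):
    """Return the keys that would fail validation."""
    return [k for k in data if not KEY_RE.match(k)]

bad = invalid_secret_keys({"": b"value"})        # empty key -> rejected
ok = invalid_secret_keys({"tls.crt": b"cert"})   # well-formed key -> fine
print(bad, ok)  # [''] []
```

The test therefore passes by observing the expected validation error rather than a created object.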
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:34:38.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Aug 19 00:34:39.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1857'
Aug 19 00:34:40.841: INFO: stderr: ""
Aug 19 00:34:40.841: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 19 00:34:41.852: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 00:34:41.853: INFO: Found 0 / 1
Aug 19 00:34:42.851: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 00:34:42.851: INFO: Found 0 / 1
Aug 19 00:34:43.849: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 00:34:43.849: INFO: Found 0 / 1
Aug 19 00:34:44.851: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 00:34:44.851: INFO: Found 0 / 1
Aug 19 00:34:45.987: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 00:34:45.988: INFO: Found 1 / 1
Aug 19 00:34:45.988: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 19 00:34:45.993: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 00:34:45.994: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 19 00:34:45.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-z9765 --namespace=kubectl-1857 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 19 00:34:47.236: INFO: stderr: ""
Aug 19 00:34:47.236: INFO: stdout: "pod/redis-master-z9765 patched\n"
STEP: checking annotations
Aug 19 00:34:47.330: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 00:34:47.330: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:34:47.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1857" for this suite.
Aug 19 00:35:09.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:35:09.548: INFO: namespace kubectl-1857 deletion completed in 22.207312425s

• [SLOW TEST:30.842 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
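The patch step above sends `{"metadata":{"annotations":{"x":"y"}}}`, which the API server applies as a merge patch. Its effect can be modeled as a recursive dictionary merge — a simplified sketch, not the apiserver's actual strategic-merge implementation; only the pod name and patch body come from the log:

```python
def merge_patch(obj, patch):
    """Recursively merge a JSON-style patch into an object (simplified model)."""
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(obj.get(key), dict):
            merge_patch(obj[key], value)  # descend into nested objects
        else:
            obj[key] = value              # scalars and new keys are set directly
    return obj

# The pod from the log, before and after the e2e test's patch:
pod = {"metadata": {"name": "redis-master-z9765", "annotations": {}}}
merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
print(pod["metadata"]["annotations"])  # {'x': 'y'}
```

The "checking annotations" step then simply reads the pod back and verifies the annotation is present, which is what the second `ForEach` loop in the log is doing.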
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:35:09.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-fe8628ad-6b2a-42bb-add4-c993b994f767
STEP: Creating a pod to test consume configMaps
Aug 19 00:35:09.706: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-16a03c4a-0dd3-47fe-adc6-54650f3d90e6" in namespace "projected-3604" to be "success or failure"
Aug 19 00:35:09.715: INFO: Pod "pod-projected-configmaps-16a03c4a-0dd3-47fe-adc6-54650f3d90e6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.840524ms
Aug 19 00:35:11.723: INFO: Pod "pod-projected-configmaps-16a03c4a-0dd3-47fe-adc6-54650f3d90e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016742105s
Aug 19 00:35:13.808: INFO: Pod "pod-projected-configmaps-16a03c4a-0dd3-47fe-adc6-54650f3d90e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101837098s
STEP: Saw pod success
Aug 19 00:35:13.808: INFO: Pod "pod-projected-configmaps-16a03c4a-0dd3-47fe-adc6-54650f3d90e6" satisfied condition "success or failure"
Aug 19 00:35:13.862: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-16a03c4a-0dd3-47fe-adc6-54650f3d90e6 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 19 00:35:13.960: INFO: Waiting for pod pod-projected-configmaps-16a03c4a-0dd3-47fe-adc6-54650f3d90e6 to disappear
Aug 19 00:35:13.975: INFO: Pod pod-projected-configmaps-16a03c4a-0dd3-47fe-adc6-54650f3d90e6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:35:13.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3604" for this suite.
Aug 19 00:35:20.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:35:20.187: INFO: namespace projected-3604 deletion completed in 6.185479083s

• [SLOW TEST:10.636 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
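"With mappings" in the test name refers to the volume's `items` list, which remaps configMap keys to custom file paths instead of exposing each key under its own name. A minimal model of that projection — the key and path values here are hypothetical illustrations, not taken from the log:

```python
def project_items(data, items):
    """Model how a configMap volume's 'items' list remaps keys to file paths
    (a simplified sketch of the projection, not kubelet code)."""
    return {item["path"]: data[item["key"]] for item in items}

# Hypothetical key/path names for illustration:
config_data = {"data-1": "value-1"}
files = project_items(config_data, [{"key": "data-1", "path": "path/to/data-2"}])
print(files)  # {'path/to/data-2': 'value-1'}
```

The pod in the test then reads the mapped file from the mount point and the framework compares its content against the configMap value, which is why the test passes on "Saw pod success".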
SSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:35:20.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 00:35:20.231: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 19 00:35:20.256: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 19 00:35:25.262: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 19 00:35:25.263: INFO: Creating deployment "test-rolling-update-deployment"
Aug 19 00:35:25.269: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 19 00:35:25.281: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 19 00:35:27.295: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Aug 19 00:35:27.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733394125, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733394125, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733394125, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733394125, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 00:35:29.324: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 19 00:35:29.338: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-4286,SelfLink:/apis/apps/v1/namespaces/deployment-4286/deployments/test-rolling-update-deployment,UID:83b5da12-ca94-49df-9dac-e17a8722771c,ResourceVersion:935276,Generation:1,CreationTimestamp:2020-08-19 00:35:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-19 00:35:25 +0000 UTC 2020-08-19 00:35:25 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-19 00:35:28 +0000 UTC 2020-08-19 00:35:25 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 19 00:35:29.344: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-4286,SelfLink:/apis/apps/v1/namespaces/deployment-4286/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:1406d74e-d5e5-4b42-bdb0-f9a637ee313b,ResourceVersion:935265,Generation:1,CreationTimestamp:2020-08-19 00:35:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 83b5da12-ca94-49df-9dac-e17a8722771c 0x40037cc737 0x40037cc738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 19 00:35:29.344: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 19 00:35:29.345: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-4286,SelfLink:/apis/apps/v1/namespaces/deployment-4286/replicasets/test-rolling-update-controller,UID:55f03a52-0de0-4335-b983-b986f663654b,ResourceVersion:935274,Generation:2,CreationTimestamp:2020-08-19 00:35:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 83b5da12-ca94-49df-9dac-e17a8722771c 0x40037cc657 0x40037cc658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 19 00:35:29.350: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-smrwl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-smrwl,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-4286,SelfLink:/api/v1/namespaces/deployment-4286/pods/test-rolling-update-deployment-79f6b9d75c-smrwl,UID:a95374a7-4deb-48ac-ab61-7b71de7bef9f,ResourceVersion:935264,Generation:0,CreationTimestamp:2020-08-19 00:35:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 1406d74e-d5e5-4b42-bdb0-f9a637ee313b 0x40037cd027 0x40037cd028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gmr4g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gmr4g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-gmr4g true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x40037cd0a0} {node.kubernetes.io/unreachable Exists  NoExecute 0x40037cd0d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:35:25 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:35:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:35:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:35:25 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.137,StartTime:2020-08-19 00:35:25 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-19 00:35:28 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://3b63ca0b1d57bfb74540f5139562dd6895de79344543974b4d330f48d7b6b48c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:35:29.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4286" for this suite.
Aug 19 00:35:35.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:35:35.509: INFO: namespace deployment-4286 deletion completed in 6.151901162s

• [SLOW TEST:15.321 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
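The deployment dump above shows the default rolling update strategy, `MaxUnavailable:25%, MaxSurge:25%`. Kubernetes resolves these percentages to absolute pod counts per rollout: maxSurge rounds up, maxUnavailable rounds down. A sketch of that arithmetic, assuming only those documented rounding rules:

```python
import math

def rolling_update_bounds(replicas, max_surge_pct=25, max_unavailable_pct=25):
    """Resolve percentage-based rolling update parameters to absolute pod counts.
    Per the Kubernetes convention, maxSurge rounds up and maxUnavailable rounds down."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return surge, unavailable

# With the single replica and default 25%/25% seen in the dump above:
print(rolling_update_bounds(1))  # (1, 0)
```

With one replica this resolves to surge 1, unavailable 0 — which matches the transient `Replicas:2, UpdatedReplicas:1` status in the log: a second pod is surged up before the old one is deleted.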
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:35:35.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 19 00:35:39.634: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:35:39.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1107" for this suite.
Aug 19 00:35:45.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:35:45.876: INFO: namespace container-runtime-1107 deletion completed in 6.161207471s

• [SLOW TEST:10.366 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
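The mechanism this test exercises: the container writes its final status to the file named by `terminationMessagePath`, and after the container exits the kubelet reads that file back into the container status, where the framework compares it against the expected `DONE`. A simplified model of that round trip (ordinary file I/O standing in for the kubelet, not kubelet code):

```python
import os
import tempfile

# The container writes its final status to the file at terminationMessagePath;
# after it exits, the kubelet reads the file back as the termination message.
path = os.path.join(tempfile.mkdtemp(), "termination-log")

with open(path, "w") as f:   # inside the container, before exit
    f.write("DONE")

with open(path) as f:        # kubelet, after the container terminates
    message = f.read()

print(message)  # DONE
```

The `[LinuxOnly]` variant here additionally runs the container as a non-root user with a non-default path, verifying the kubelet can still read the message file in that configuration.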
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:35:45.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:35:51.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1149" for this suite.
Aug 19 00:35:57.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:35:57.677: INFO: namespace watch-1149 deletion completed in 6.277202953s

• [SLOW TEST:11.799 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
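The ordering property this test verifies can be modeled simply: every watcher's event stream must be a suffix of the full stream, since a watch started at a later resourceVersion sees fewer events but never a different order. A sketch with hypothetical resourceVersion sequences:

```python
def same_order(watch_streams):
    """Every stream must be a suffix of the longest one: watchers that start
    at a later resourceVersion see fewer events, but never a different order."""
    reference = max(watch_streams, key=len)
    return all(reference[len(reference) - len(s):] == s for s in watch_streams)

# Hypothetical resourceVersion sequences from three concurrent watchers:
print(same_order([[101, 102, 103], [102, 103], [103]]))  # True
print(same_order([[101, 102, 103], [103, 102]]))         # False
```

This is what the "creating watches starting from each resource version" step does at scale: one watch per produced event, all checked against a single reference ordering.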
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:35:57.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-835f5878-0346-487e-8ad0-0751a2a86ea1
STEP: Creating a pod to test consume secrets
Aug 19 00:35:58.057: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2ac94b60-38bb-4a2b-a113-ccc94c0f08b2" in namespace "projected-1144" to be "success or failure"
Aug 19 00:35:58.085: INFO: Pod "pod-projected-secrets-2ac94b60-38bb-4a2b-a113-ccc94c0f08b2": Phase="Pending", Reason="", readiness=false. Elapsed: 27.360306ms
Aug 19 00:36:00.091: INFO: Pod "pod-projected-secrets-2ac94b60-38bb-4a2b-a113-ccc94c0f08b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033165658s
Aug 19 00:36:02.098: INFO: Pod "pod-projected-secrets-2ac94b60-38bb-4a2b-a113-ccc94c0f08b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039925256s
Aug 19 00:36:04.104: INFO: Pod "pod-projected-secrets-2ac94b60-38bb-4a2b-a113-ccc94c0f08b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046345102s
STEP: Saw pod success
Aug 19 00:36:04.104: INFO: Pod "pod-projected-secrets-2ac94b60-38bb-4a2b-a113-ccc94c0f08b2" satisfied condition "success or failure"
Aug 19 00:36:04.123: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-2ac94b60-38bb-4a2b-a113-ccc94c0f08b2 container projected-secret-volume-test: 
STEP: delete the pod
Aug 19 00:36:04.154: INFO: Waiting for pod pod-projected-secrets-2ac94b60-38bb-4a2b-a113-ccc94c0f08b2 to disappear
Aug 19 00:36:04.179: INFO: Pod pod-projected-secrets-2ac94b60-38bb-4a2b-a113-ccc94c0f08b2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:36:04.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1144" for this suite.
Aug 19 00:36:10.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:36:10.340: INFO: namespace projected-1144 deletion completed in 6.151237696s

• [SLOW TEST:12.662 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
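"Item Mode" in the test name is the per-file permission mode on the projected secret volume. The API stores modes as decimal int32s — which is why the pod dumps above show `DefaultMode:*420`: 420 decimal is 0644 octal. A sketch of the conversion and of rendering a mode as permission bits:

```python
# The API stores file modes as decimal int32s; 420 decimal == 0o644 (rw-r--r--).
default_mode = 420
print(oct(default_mode))  # 0o644

def mode_string(mode):
    """Render a numeric mode as an ls-style permission string."""
    bits = "rwxrwxrwx"
    return "".join(b if mode & (1 << (8 - i)) else "-" for i, b in enumerate(bits))

print(mode_string(0o644))  # rw-r--r--
```

The test sets an explicit mode on one projected item, then has the pod stat the mounted file and report its permissions, which the framework compares against the requested mode.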
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:36:10.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-50f80721-2e1b-4fe4-9bb0-64d3a7f2d7d7
STEP: Creating a pod to test consume configMaps
Aug 19 00:36:10.458: INFO: Waiting up to 5m0s for pod "pod-configmaps-e8e673fe-73c1-4f02-8daa-06c1bcb73d6e" in namespace "configmap-5097" to be "success or failure"
Aug 19 00:36:10.481: INFO: Pod "pod-configmaps-e8e673fe-73c1-4f02-8daa-06c1bcb73d6e": Phase="Pending", Reason="", readiness=false. Elapsed: 22.949848ms
Aug 19 00:36:12.489: INFO: Pod "pod-configmaps-e8e673fe-73c1-4f02-8daa-06c1bcb73d6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030033684s
Aug 19 00:36:14.517: INFO: Pod "pod-configmaps-e8e673fe-73c1-4f02-8daa-06c1bcb73d6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058012551s
STEP: Saw pod success
Aug 19 00:36:14.517: INFO: Pod "pod-configmaps-e8e673fe-73c1-4f02-8daa-06c1bcb73d6e" satisfied condition "success or failure"
Aug 19 00:36:14.539: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-e8e673fe-73c1-4f02-8daa-06c1bcb73d6e container configmap-volume-test: 
STEP: delete the pod
Aug 19 00:36:14.616: INFO: Waiting for pod pod-configmaps-e8e673fe-73c1-4f02-8daa-06c1bcb73d6e to disappear
Aug 19 00:36:14.619: INFO: Pod pod-configmaps-e8e673fe-73c1-4f02-8daa-06c1bcb73d6e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:36:14.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5097" for this suite.
Aug 19 00:36:20.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:36:20.947: INFO: namespace configmap-5097 deletion completed in 6.321462912s

• [SLOW TEST:10.606 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
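The spec above mounts a ConfigMap as a volume under a non-root security context and asserts on the file contents. A minimal manifest for the same scenario might look like the following sketch (all names, the user ID, and the test image are illustrative, not the ones the suite generates):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume      # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example     # illustrative name
spec:
  securityContext:
    runAsUser: 1000                # non-root, as the [LinuxOnly] variant requires
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                 # assumed stand-in for the e2e mounttest image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```

The pod terminating in phase Succeeded, as logged above, is what the "success or failure" condition checks for.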
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:36:20.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 19 00:36:21.047: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 19 00:36:21.064: INFO: Waiting for terminating namespaces to be deleted...
Aug 19 00:36:21.071: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Aug 19 00:36:21.081: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 00:36:21.082: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 19 00:36:21.082: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 00:36:21.082: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 19 00:36:21.082: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Aug 19 00:36:21.090: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 00:36:21.090: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 19 00:36:21.090: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 00:36:21.090: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162c8440ad6740d3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:36:22.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3518" for this suite.
Aug 19 00:36:28.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:36:28.318: INFO: namespace sched-pred-3518 deletion completed in 6.16238547s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.368 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
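The FailedScheduling event recorded above ("0/3 nodes are available: 3 node(s) didn't match node selector") is what the scheduler emits when no node carries the requested label. A pod that reproduces it could be sketched as (label key and value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label: nonempty                # no node carries this label, so the pod stays Pending
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```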
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:36:28.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-fp7h
STEP: Creating a pod to test atomic-volume-subpath
Aug 19 00:36:28.430: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fp7h" in namespace "subpath-7211" to be "success or failure"
Aug 19 00:36:28.493: INFO: Pod "pod-subpath-test-projected-fp7h": Phase="Pending", Reason="", readiness=false. Elapsed: 62.745124ms
Aug 19 00:36:30.858: INFO: Pod "pod-subpath-test-projected-fp7h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.427929123s
Aug 19 00:36:32.865: INFO: Pod "pod-subpath-test-projected-fp7h": Phase="Running", Reason="", readiness=true. Elapsed: 4.435206957s
Aug 19 00:36:34.872: INFO: Pod "pod-subpath-test-projected-fp7h": Phase="Running", Reason="", readiness=true. Elapsed: 6.442395817s
Aug 19 00:36:36.879: INFO: Pod "pod-subpath-test-projected-fp7h": Phase="Running", Reason="", readiness=true. Elapsed: 8.449503896s
Aug 19 00:36:38.887: INFO: Pod "pod-subpath-test-projected-fp7h": Phase="Running", Reason="", readiness=true. Elapsed: 10.456811903s
Aug 19 00:36:40.894: INFO: Pod "pod-subpath-test-projected-fp7h": Phase="Running", Reason="", readiness=true. Elapsed: 12.464347404s
Aug 19 00:36:42.902: INFO: Pod "pod-subpath-test-projected-fp7h": Phase="Running", Reason="", readiness=true. Elapsed: 14.472440638s
Aug 19 00:36:44.910: INFO: Pod "pod-subpath-test-projected-fp7h": Phase="Running", Reason="", readiness=true. Elapsed: 16.479760249s
Aug 19 00:36:46.917: INFO: Pod "pod-subpath-test-projected-fp7h": Phase="Running", Reason="", readiness=true. Elapsed: 18.486784597s
Aug 19 00:36:48.924: INFO: Pod "pod-subpath-test-projected-fp7h": Phase="Running", Reason="", readiness=true. Elapsed: 20.493934689s
Aug 19 00:36:50.932: INFO: Pod "pod-subpath-test-projected-fp7h": Phase="Running", Reason="", readiness=true. Elapsed: 22.50183407s
Aug 19 00:36:52.938: INFO: Pod "pod-subpath-test-projected-fp7h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.508688163s
STEP: Saw pod success
Aug 19 00:36:52.939: INFO: Pod "pod-subpath-test-projected-fp7h" satisfied condition "success or failure"
Aug 19 00:36:52.945: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-fp7h container test-container-subpath-projected-fp7h: 
STEP: delete the pod
Aug 19 00:36:53.017: INFO: Waiting for pod pod-subpath-test-projected-fp7h to disappear
Aug 19 00:36:53.021: INFO: Pod pod-subpath-test-projected-fp7h no longer exists
STEP: Deleting pod pod-subpath-test-projected-fp7h
Aug 19 00:36:53.021: INFO: Deleting pod "pod-subpath-test-projected-fp7h" in namespace "subpath-7211"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:36:53.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7211" for this suite.
Aug 19 00:36:59.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:36:59.244: INFO: namespace subpath-7211 deletion completed in 6.210991949s

• [SLOW TEST:30.925 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
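The atomic-writer subpath spec above mounts a single path of a projected volume via `subPath`. A minimal sketch of such a pod (names and the referenced ConfigMap are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-projected
    image: busybox
    command: ["sh", "-c", "cat /test-volume && sleep 20"]
    volumeMounts:
    - name: projected-vol
      mountPath: /test-volume
      subPath: sub                   # mount one entry of the volume, not the whole directory
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: my-configmap         # assumed to exist with a key named "sub"
```

The long run of `Phase="Running"` polls in the log corresponds to the container reading the subpath contents for several seconds before exiting.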
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:36:59.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2707
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 19 00:36:59.363: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 19 00:37:27.562: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.38:8080/dial?request=hostName&protocol=udp&host=10.244.2.141&port=8081&tries=1'] Namespace:pod-network-test-2707 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 00:37:27.563: INFO: >>> kubeConfig: /root/.kube/config
I0819 00:37:27.634865       7 log.go:172] (0x40033124d0) (0x40038fd400) Create stream
I0819 00:37:27.635022       7 log.go:172] (0x40033124d0) (0x40038fd400) Stream added, broadcasting: 1
I0819 00:37:27.660161       7 log.go:172] (0x40033124d0) Reply frame received for 1
I0819 00:37:27.660358       7 log.go:172] (0x40033124d0) (0x4002212000) Create stream
I0819 00:37:27.660432       7 log.go:172] (0x40033124d0) (0x4002212000) Stream added, broadcasting: 3
I0819 00:37:27.661811       7 log.go:172] (0x40033124d0) Reply frame received for 3
I0819 00:37:27.661932       7 log.go:172] (0x40033124d0) (0x40034f8000) Create stream
I0819 00:37:27.661998       7 log.go:172] (0x40033124d0) (0x40034f8000) Stream added, broadcasting: 5
I0819 00:37:27.663001       7 log.go:172] (0x40033124d0) Reply frame received for 5
I0819 00:37:27.726522       7 log.go:172] (0x40033124d0) Data frame received for 5
I0819 00:37:27.726679       7 log.go:172] (0x40034f8000) (5) Data frame handling
I0819 00:37:27.726851       7 log.go:172] (0x40033124d0) Data frame received for 3
I0819 00:37:27.726994       7 log.go:172] (0x4002212000) (3) Data frame handling
I0819 00:37:27.727145       7 log.go:172] (0x4002212000) (3) Data frame sent
I0819 00:37:27.727258       7 log.go:172] (0x40033124d0) Data frame received for 3
I0819 00:37:27.727352       7 log.go:172] (0x4002212000) (3) Data frame handling
I0819 00:37:27.728685       7 log.go:172] (0x40033124d0) Data frame received for 1
I0819 00:37:27.729057       7 log.go:172] (0x40038fd400) (1) Data frame handling
I0819 00:37:27.729240       7 log.go:172] (0x40038fd400) (1) Data frame sent
I0819 00:37:27.729489       7 log.go:172] (0x40033124d0) (0x40038fd400) Stream removed, broadcasting: 1
I0819 00:37:27.729740       7 log.go:172] (0x40033124d0) Go away received
I0819 00:37:27.730778       7 log.go:172] (0x40033124d0) (0x40038fd400) Stream removed, broadcasting: 1
I0819 00:37:27.730930       7 log.go:172] (0x40033124d0) (0x4002212000) Stream removed, broadcasting: 3
I0819 00:37:27.731075       7 log.go:172] (0x40033124d0) (0x40034f8000) Stream removed, broadcasting: 5
Aug 19 00:37:27.732: INFO: Waiting for endpoints: map[]
Aug 19 00:37:27.738: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.38:8080/dial?request=hostName&protocol=udp&host=10.244.1.37&port=8081&tries=1'] Namespace:pod-network-test-2707 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 00:37:27.738: INFO: >>> kubeConfig: /root/.kube/config
I0819 00:37:27.807657       7 log.go:172] (0x400075b6b0) (0x4002fec3c0) Create stream
I0819 00:37:27.807785       7 log.go:172] (0x400075b6b0) (0x4002fec3c0) Stream added, broadcasting: 1
I0819 00:37:27.813987       7 log.go:172] (0x400075b6b0) Reply frame received for 1
I0819 00:37:27.814298       7 log.go:172] (0x400075b6b0) (0x40010b4140) Create stream
I0819 00:37:27.814434       7 log.go:172] (0x400075b6b0) (0x40010b4140) Stream added, broadcasting: 3
I0819 00:37:27.817781       7 log.go:172] (0x400075b6b0) Reply frame received for 3
I0819 00:37:27.817978       7 log.go:172] (0x400075b6b0) (0x40034f80a0) Create stream
I0819 00:37:27.818078       7 log.go:172] (0x400075b6b0) (0x40034f80a0) Stream added, broadcasting: 5
I0819 00:37:27.822000       7 log.go:172] (0x400075b6b0) Reply frame received for 5
I0819 00:37:27.881152       7 log.go:172] (0x400075b6b0) Data frame received for 3
I0819 00:37:27.881390       7 log.go:172] (0x40010b4140) (3) Data frame handling
I0819 00:37:27.881497       7 log.go:172] (0x400075b6b0) Data frame received for 5
I0819 00:37:27.881614       7 log.go:172] (0x40034f80a0) (5) Data frame handling
I0819 00:37:27.881775       7 log.go:172] (0x40010b4140) (3) Data frame sent
I0819 00:37:27.881960       7 log.go:172] (0x400075b6b0) Data frame received for 3
I0819 00:37:27.882030       7 log.go:172] (0x40010b4140) (3) Data frame handling
I0819 00:37:27.883342       7 log.go:172] (0x400075b6b0) Data frame received for 1
I0819 00:37:27.883417       7 log.go:172] (0x4002fec3c0) (1) Data frame handling
I0819 00:37:27.883500       7 log.go:172] (0x4002fec3c0) (1) Data frame sent
I0819 00:37:27.883586       7 log.go:172] (0x400075b6b0) (0x4002fec3c0) Stream removed, broadcasting: 1
I0819 00:37:27.883683       7 log.go:172] (0x400075b6b0) Go away received
I0819 00:37:27.884340       7 log.go:172] (0x400075b6b0) (0x4002fec3c0) Stream removed, broadcasting: 1
I0819 00:37:27.884549       7 log.go:172] (0x400075b6b0) (0x40010b4140) Stream removed, broadcasting: 3
I0819 00:37:27.884659       7 log.go:172] (0x400075b6b0) (0x40034f80a0) Stream removed, broadcasting: 5
Aug 19 00:37:27.885: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:37:27.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2707" for this suite.
Aug 19 00:37:49.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:37:50.016: INFO: namespace pod-network-test-2707 deletion completed in 22.121124573s

• [SLOW TEST:50.769 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:37:50.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Aug 19 00:37:54.258: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 19 00:38:00.529: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:38:00.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1197" for this suite.
Aug 19 00:38:06.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:38:06.741: INFO: namespace pods-1197 deletion completed in 6.197910682s

• [SLOW TEST:16.724 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
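The delete-grace-period behavior exercised above can be reproduced with a pod that declares its own termination grace period; deletion can then shorten it. A sketch (name and period are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-pod               # illustrative name
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```

Deleting with `kubectl delete pod graceful-pod --grace-period=5` overrides the spec's 30-second period, and the kubelet observing the termination notice is what the spec verifies.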
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:38:06.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-bfa5ee48-95c6-4737-ab26-277455751a97
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-bfa5ee48-95c6-4737-ab26-277455751a97
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:39:30.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2114" for this suite.
Aug 19 00:39:54.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:39:54.913: INFO: namespace configmap-2114 deletion completed in 24.18921433s

• [SLOW TEST:108.171 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:39:54.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-f8574fcf-eae1-4926-ada2-043b604954c9
STEP: Creating a pod to test consume configMaps
Aug 19 00:39:55.041: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-72cc00a5-830e-4c43-a706-42a26f694e7e" in namespace "projected-2024" to be "success or failure"
Aug 19 00:39:55.062: INFO: Pod "pod-projected-configmaps-72cc00a5-830e-4c43-a706-42a26f694e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.494583ms
Aug 19 00:39:57.069: INFO: Pod "pod-projected-configmaps-72cc00a5-830e-4c43-a706-42a26f694e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02761312s
Aug 19 00:39:59.208: INFO: Pod "pod-projected-configmaps-72cc00a5-830e-4c43-a706-42a26f694e7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.166241705s
STEP: Saw pod success
Aug 19 00:39:59.208: INFO: Pod "pod-projected-configmaps-72cc00a5-830e-4c43-a706-42a26f694e7e" satisfied condition "success or failure"
Aug 19 00:39:59.222: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-72cc00a5-830e-4c43-a706-42a26f694e7e container projected-configmap-volume-test: 
STEP: delete the pod
Aug 19 00:39:59.283: INFO: Waiting for pod pod-projected-configmaps-72cc00a5-830e-4c43-a706-42a26f694e7e to disappear
Aug 19 00:39:59.299: INFO: Pod pod-projected-configmaps-72cc00a5-830e-4c43-a706-42a26f694e7e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:39:59.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2024" for this suite.
Aug 19 00:40:05.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:40:05.520: INFO: namespace projected-2024 deletion completed in 6.212877416s

• [SLOW TEST:10.605 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
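The defaultMode variant above sets the file permission bits applied to every key projected into the volume. A sketch of the relevant volume stanza (ConfigMap name and mode are illustrative):

```yaml
  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0400            # files appear read-only to the owning user
      sources:
      - configMap:
          name: projected-configmap-test-volume   # illustrative name
```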
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:40:05.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:40:13.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4435" for this suite.
Aug 19 00:40:19.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:40:19.909: INFO: namespace namespaces-4435 deletion completed in 6.161222088s
STEP: Destroying namespace "nsdeletetest-382" for this suite.
Aug 19 00:40:19.913: INFO: Namespace nsdeletetest-382 was already deleted
STEP: Destroying namespace "nsdeletetest-6576" for this suite.
Aug 19 00:40:26.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:40:26.146: INFO: namespace nsdeletetest-6576 deletion completed in 6.232729168s

• [SLOW TEST:20.624 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:40:26.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:40:54.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1922" for this suite.
Aug 19 00:41:00.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:41:01.060: INFO: namespace namespaces-1922 deletion completed in 6.169361004s
STEP: Destroying namespace "nsdeletetest-7243" for this suite.
Aug 19 00:41:01.064: INFO: Namespace nsdeletetest-7243 was already deleted
STEP: Destroying namespace "nsdeletetest-1418" for this suite.
Aug 19 00:41:07.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:41:07.499: INFO: namespace nsdeletetest-1418 deletion completed in 6.434713656s

• [SLOW TEST:41.353 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:41:07.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 19 00:41:07.714: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:41:16.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5096" for this suite.
Aug 19 00:41:22.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:41:22.991: INFO: namespace init-container-5096 deletion completed in 6.206772845s

• [SLOW TEST:15.489 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
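Editor's note: the InitContainer test above creates a pod whose init containers fail under `restartPolicy: Never`, then verifies the app containers never start and the pod fails. A minimal manifest reproducing that scenario might look like this (names and the busybox image tag are illustrative, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo        # hypothetical name
spec:
  restartPolicy: Never        # init failure is terminal; pod phase becomes Failed
  initContainers:
  - name: init-fail
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # exits non-zero, so the pod never reaches its app container
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]    # never runs
```

With `restartPolicy: OnFailure` or `Always`, the kubelet would instead retry the init container rather than failing the pod.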
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:41:22.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 19 00:41:23.248: INFO: Waiting up to 5m0s for pod "pod-6f0ddfe8-ed0e-438a-b76a-dc90d3a2cf0d" in namespace "emptydir-1845" to be "success or failure"
Aug 19 00:41:23.253: INFO: Pod "pod-6f0ddfe8-ed0e-438a-b76a-dc90d3a2cf0d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.558304ms
Aug 19 00:41:25.290: INFO: Pod "pod-6f0ddfe8-ed0e-438a-b76a-dc90d3a2cf0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042025893s
Aug 19 00:41:27.296: INFO: Pod "pod-6f0ddfe8-ed0e-438a-b76a-dc90d3a2cf0d": Phase="Running", Reason="", readiness=true. Elapsed: 4.047834315s
Aug 19 00:41:29.309: INFO: Pod "pod-6f0ddfe8-ed0e-438a-b76a-dc90d3a2cf0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060871302s
STEP: Saw pod success
Aug 19 00:41:29.309: INFO: Pod "pod-6f0ddfe8-ed0e-438a-b76a-dc90d3a2cf0d" satisfied condition "success or failure"
Aug 19 00:41:29.319: INFO: Trying to get logs from node iruya-worker2 pod pod-6f0ddfe8-ed0e-438a-b76a-dc90d3a2cf0d container test-container: 
STEP: delete the pod
Aug 19 00:41:29.411: INFO: Waiting for pod pod-6f0ddfe8-ed0e-438a-b76a-dc90d3a2cf0d to disappear
Aug 19 00:41:29.421: INFO: Pod pod-6f0ddfe8-ed0e-438a-b76a-dc90d3a2cf0d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:41:29.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1845" for this suite.
Aug 19 00:41:35.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:41:35.604: INFO: namespace emptydir-1845 deletion completed in 6.174662927s

• [SLOW TEST:12.606 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
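Editor's note: the EmptyDir test above checks a volume on the default medium (node disk) with 0777 permissions as root. A sketch of an equivalent pod, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume"]   # inspect mode/owner of the mount
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir: {}              # default medium: node-local storage (set medium: Memory for tmpfs)
```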
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:41:35.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-7638676d-93d4-4f76-bbc6-7f7811fc2103 in namespace container-probe-6326
Aug 19 00:41:41.515: INFO: Started pod busybox-7638676d-93d4-4f76-bbc6-7f7811fc2103 in namespace container-probe-6326
STEP: checking the pod's current state and verifying that restartCount is present
Aug 19 00:41:41.521: INFO: Initial restart count of pod busybox-7638676d-93d4-4f76-bbc6-7f7811fc2103 is 0
Aug 19 00:42:31.803: INFO: Restart count of pod container-probe-6326/busybox-7638676d-93d4-4f76-bbc6-7f7811fc2103 is now 1 (50.281828808s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:42:31.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6326" for this suite.
Aug 19 00:42:37.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:42:37.992: INFO: namespace container-probe-6326 deletion completed in 6.153691797s

• [SLOW TEST:62.384 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
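Editor's note: the probe test above observes the restart count go from 0 to 1 roughly 50 seconds after start, consistent with a pod that deletes its own health file partway through. A sketch of such a pod (timings and names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo    # hypothetical name
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    # Healthy for 30s, then the probe starts failing and kubelet restarts the container.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5        # failureThreshold (default 3) x period adds to the observed delay
```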
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:42:37.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Aug 19 00:42:38.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4789'
Aug 19 00:42:40.019: INFO: stderr: ""
Aug 19 00:42:40.019: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Aug 19 00:42:41.027: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 00:42:41.027: INFO: Found 0 / 1
Aug 19 00:42:42.037: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 00:42:42.038: INFO: Found 0 / 1
Aug 19 00:42:43.027: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 00:42:43.027: INFO: Found 0 / 1
Aug 19 00:42:44.029: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 00:42:44.029: INFO: Found 1 / 1
Aug 19 00:42:44.029: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 19 00:42:44.034: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 00:42:44.034: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Aug 19 00:42:44.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pdw28 redis-master --namespace=kubectl-4789'
Aug 19 00:42:53.257: INFO: stderr: ""
Aug 19 00:42:53.257: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 19 Aug 00:42:42.834 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 Aug 00:42:42.834 # Server started, Redis version 3.2.12\n1:M 19 Aug 00:42:42.834 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 Aug 00:42:42.834 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Aug 19 00:42:53.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pdw28 redis-master --namespace=kubectl-4789 --tail=1'
Aug 19 00:42:54.565: INFO: stderr: ""
Aug 19 00:42:54.565: INFO: stdout: "1:M 19 Aug 00:42:42.834 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Aug 19 00:42:54.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pdw28 redis-master --namespace=kubectl-4789 --limit-bytes=1'
Aug 19 00:42:55.846: INFO: stderr: ""
Aug 19 00:42:55.847: INFO: stdout: " "
STEP: exposing timestamps
Aug 19 00:42:55.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pdw28 redis-master --namespace=kubectl-4789 --tail=1 --timestamps'
Aug 19 00:42:57.232: INFO: stderr: ""
Aug 19 00:42:57.232: INFO: stdout: "2020-08-19T00:42:42.834519748Z 1:M 19 Aug 00:42:42.834 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Aug 19 00:42:59.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pdw28 redis-master --namespace=kubectl-4789 --since=1s'
Aug 19 00:43:01.050: INFO: stderr: ""
Aug 19 00:43:01.050: INFO: stdout: ""
Aug 19 00:43:01.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pdw28 redis-master --namespace=kubectl-4789 --since=24h'
Aug 19 00:43:02.763: INFO: stderr: ""
Aug 19 00:43:02.763: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 19 Aug 00:42:42.834 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 Aug 00:42:42.834 # Server started, Redis version 3.2.12\n1:M 19 Aug 00:42:42.834 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 Aug 00:42:42.834 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Aug 19 00:43:02.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4789'
Aug 19 00:43:04.013: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 00:43:04.013: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Aug 19 00:43:04.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-4789'
Aug 19 00:43:05.299: INFO: stderr: "No resources found.\n"
Aug 19 00:43:05.299: INFO: stdout: ""
Aug 19 00:43:05.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-4789 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 19 00:43:06.600: INFO: stderr: ""
Aug 19 00:43:06.600: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:43:06.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4789" for this suite.
Aug 19 00:43:30.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:43:30.756: INFO: namespace kubectl-4789 deletion completed in 24.146257079s

• [SLOW TEST:52.763 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
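Editor's note: the kubectl invocations in the test above can be summarized as the following log-filtering flags, using the pod name from this run (these are CLI sketches, not re-runnable against this long-gone cluster):

```shell
# Full container log
kubectl logs redis-master-pdw28 redis-master -n kubectl-4789

# Last line only
kubectl logs redis-master-pdw28 redis-master -n kubectl-4789 --tail=1

# First byte only
kubectl logs redis-master-pdw28 redis-master -n kubectl-4789 --limit-bytes=1

# Prefix each line with an RFC3339 timestamp
kubectl logs redis-master-pdw28 redis-master -n kubectl-4789 --tail=1 --timestamps

# Restrict to a time window (empty output when nothing was logged in the last second)
kubectl logs redis-master-pdw28 redis-master -n kubectl-4789 --since=1s
kubectl logs redis-master-pdw28 redis-master -n kubectl-4789 --since=24h
```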
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:43:30.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 00:43:31.477: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52bbc378-a7c5-40f2-815d-c124cbd591fd" in namespace "downward-api-3749" to be "success or failure"
Aug 19 00:43:31.829: INFO: Pod "downwardapi-volume-52bbc378-a7c5-40f2-815d-c124cbd591fd": Phase="Pending", Reason="", readiness=false. Elapsed: 351.713972ms
Aug 19 00:43:33.834: INFO: Pod "downwardapi-volume-52bbc378-a7c5-40f2-815d-c124cbd591fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.356575809s
Aug 19 00:43:35.839: INFO: Pod "downwardapi-volume-52bbc378-a7c5-40f2-815d-c124cbd591fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.362357087s
Aug 19 00:43:38.128: INFO: Pod "downwardapi-volume-52bbc378-a7c5-40f2-815d-c124cbd591fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.650936791s
Aug 19 00:43:40.367: INFO: Pod "downwardapi-volume-52bbc378-a7c5-40f2-815d-c124cbd591fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.89001102s
STEP: Saw pod success
Aug 19 00:43:40.367: INFO: Pod "downwardapi-volume-52bbc378-a7c5-40f2-815d-c124cbd591fd" satisfied condition "success or failure"
Aug 19 00:43:40.576: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-52bbc378-a7c5-40f2-815d-c124cbd591fd container client-container: 
STEP: delete the pod
Aug 19 00:43:40.949: INFO: Waiting for pod downwardapi-volume-52bbc378-a7c5-40f2-815d-c124cbd591fd to disappear
Aug 19 00:43:40.993: INFO: Pod downwardapi-volume-52bbc378-a7c5-40f2-815d-c124cbd591fd no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:43:40.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3749" for this suite.
Aug 19 00:43:47.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:43:48.794: INFO: namespace downward-api-3749 deletion completed in 7.792901123s

• [SLOW TEST:18.035 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
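Editor's note: the Downward API volume test above projects `limits.memory` into a file; because the container sets no memory limit, the kubelet substitutes the node's allocatable memory. A sketch with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    # No resources.limits.memory set, so the projected value
    # defaults to the node's allocatable memory.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```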
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:43:48.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0819 00:44:01.317824       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 19 00:44:01.318: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:44:01.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8972" for this suite.
Aug 19 00:44:07.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:44:08.148: INFO: namespace gc-8972 deletion completed in 6.820988799s

• [SLOW TEST:19.347 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
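Editor's note: the garbage-collector test above relies on the `ownerReferences` that a ReplicationController stamps onto its pods; deleting the RC without orphaning lets the GC delete the dependents. The metadata involved looks roughly like this (the RC name and UID placeholder are illustrative):

```yaml
# Fragment of a pod created by an RC; the API server fills in the uid.
metadata:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc       # hypothetical owner name
    uid: <rc-uid>             # assigned by the API server
    controller: true
    blockOwnerDeletion: true  # owner's foreground deletion waits on this pod
```

Deleting with `kubectl delete rc simpletest-rc` (default cascading) removes the pods; `--cascade=orphan` would instead strip the owner references and leave them running.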
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:44:08.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 19 00:44:08.673: INFO: Waiting up to 5m0s for pod "downward-api-56f096bd-14d5-40ec-af12-6a8f0fde9699" in namespace "downward-api-8770" to be "success or failure"
Aug 19 00:44:08.744: INFO: Pod "downward-api-56f096bd-14d5-40ec-af12-6a8f0fde9699": Phase="Pending", Reason="", readiness=false. Elapsed: 71.200801ms
Aug 19 00:44:10.999: INFO: Pod "downward-api-56f096bd-14d5-40ec-af12-6a8f0fde9699": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326728979s
Aug 19 00:44:13.368: INFO: Pod "downward-api-56f096bd-14d5-40ec-af12-6a8f0fde9699": Phase="Pending", Reason="", readiness=false. Elapsed: 4.695303572s
Aug 19 00:44:15.415: INFO: Pod "downward-api-56f096bd-14d5-40ec-af12-6a8f0fde9699": Phase="Pending", Reason="", readiness=false. Elapsed: 6.742362408s
Aug 19 00:44:17.794: INFO: Pod "downward-api-56f096bd-14d5-40ec-af12-6a8f0fde9699": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.120826487s
STEP: Saw pod success
Aug 19 00:44:17.794: INFO: Pod "downward-api-56f096bd-14d5-40ec-af12-6a8f0fde9699" satisfied condition "success or failure"
Aug 19 00:44:17.815: INFO: Trying to get logs from node iruya-worker2 pod downward-api-56f096bd-14d5-40ec-af12-6a8f0fde9699 container dapi-container: 
STEP: delete the pod
Aug 19 00:44:17.936: INFO: Waiting for pod downward-api-56f096bd-14d5-40ec-af12-6a8f0fde9699 to disappear
Aug 19 00:44:17.961: INFO: Pod downward-api-56f096bd-14d5-40ec-af12-6a8f0fde9699 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:44:17.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8770" for this suite.
Aug 19 00:44:26.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:44:26.548: INFO: namespace downward-api-8770 deletion completed in 8.580413432s

• [SLOW TEST:18.397 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
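Editor's note: the Downward API env-var test above exposes pod metadata through `fieldRef`. A minimal equivalent pod, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP   # populated once the pod is scheduled and running
```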
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:44:26.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-40248509-6f7b-45a3-ac87-b05cc71339ad
STEP: Creating a pod to test consume configMaps
Aug 19 00:44:27.053: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-14d9b44c-a06b-400a-b987-f860909c8202" in namespace "projected-3113" to be "success or failure"
Aug 19 00:44:27.060: INFO: Pod "pod-projected-configmaps-14d9b44c-a06b-400a-b987-f860909c8202": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08438ms
Aug 19 00:44:29.066: INFO: Pod "pod-projected-configmaps-14d9b44c-a06b-400a-b987-f860909c8202": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012586615s
Aug 19 00:44:31.072: INFO: Pod "pod-projected-configmaps-14d9b44c-a06b-400a-b987-f860909c8202": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018719055s
Aug 19 00:44:33.321: INFO: Pod "pod-projected-configmaps-14d9b44c-a06b-400a-b987-f860909c8202": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267387962s
Aug 19 00:44:35.327: INFO: Pod "pod-projected-configmaps-14d9b44c-a06b-400a-b987-f860909c8202": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.273483691s
STEP: Saw pod success
Aug 19 00:44:35.327: INFO: Pod "pod-projected-configmaps-14d9b44c-a06b-400a-b987-f860909c8202" satisfied condition "success or failure"
Aug 19 00:44:35.332: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-14d9b44c-a06b-400a-b987-f860909c8202 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 19 00:44:35.458: INFO: Waiting for pod pod-projected-configmaps-14d9b44c-a06b-400a-b987-f860909c8202 to disappear
Aug 19 00:44:35.527: INFO: Pod pod-projected-configmaps-14d9b44c-a06b-400a-b987-f860909c8202 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:44:35.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3113" for this suite.
Aug 19 00:44:43.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:44:43.816: INFO: namespace projected-3113 deletion completed in 8.280214441s

• [SLOW TEST:17.268 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:44:43.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 00:44:45.454: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"484b803b-1d89-4c21-8692-200504c97e7b", Controller:(*bool)(0x40028508ba), BlockOwnerDeletion:(*bool)(0x40028508bb)}}
Aug 19 00:44:45.525: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"cafe7c62-eb0d-47e0-8bad-7dec04e1d80d", Controller:(*bool)(0x4002f3e2a2), BlockOwnerDeletion:(*bool)(0x4002f3e2a3)}}
Aug 19 00:44:45.650: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"7c679144-61b6-451c-a684-3a58621afac0", Controller:(*bool)(0x4002850a8a), BlockOwnerDeletion:(*bool)(0x4002850a8b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:44:50.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6660" for this suite.
Aug 19 00:44:56.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:44:57.029: INFO: namespace gc-6660 deletion completed in 6.154005015s

• [SLOW TEST:13.212 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:44:57.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8834
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 19 00:44:57.136: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 19 00:45:19.358: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.46:8080/dial?request=hostName&protocol=http&host=10.244.1.45&port=8080&tries=1'] Namespace:pod-network-test-8834 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 00:45:19.358: INFO: >>> kubeConfig: /root/.kube/config
I0819 00:45:19.480836       7 log.go:172] (0x40025133f0) (0x4002451900) Create stream
I0819 00:45:19.481060       7 log.go:172] (0x40025133f0) (0x4002451900) Stream added, broadcasting: 1
I0819 00:45:19.486559       7 log.go:172] (0x40025133f0) Reply frame received for 1
I0819 00:45:19.486797       7 log.go:172] (0x40025133f0) (0x40034f8000) Create stream
I0819 00:45:19.486947       7 log.go:172] (0x40025133f0) (0x40034f8000) Stream added, broadcasting: 3
I0819 00:45:19.489174       7 log.go:172] (0x40025133f0) Reply frame received for 3
I0819 00:45:19.489353       7 log.go:172] (0x40025133f0) (0x40016e2320) Create stream
I0819 00:45:19.489414       7 log.go:172] (0x40025133f0) (0x40016e2320) Stream added, broadcasting: 5
I0819 00:45:19.491132       7 log.go:172] (0x40025133f0) Reply frame received for 5
I0819 00:45:19.552139       7 log.go:172] (0x40025133f0) Data frame received for 3
I0819 00:45:19.552335       7 log.go:172] (0x40034f8000) (3) Data frame handling
I0819 00:45:19.552422       7 log.go:172] (0x40025133f0) Data frame received for 5
I0819 00:45:19.552521       7 log.go:172] (0x40016e2320) (5) Data frame handling
I0819 00:45:19.552595       7 log.go:172] (0x40034f8000) (3) Data frame sent
I0819 00:45:19.552696       7 log.go:172] (0x40025133f0) Data frame received for 3
I0819 00:45:19.552862       7 log.go:172] (0x40034f8000) (3) Data frame handling
I0819 00:45:19.553963       7 log.go:172] (0x40025133f0) Data frame received for 1
I0819 00:45:19.554047       7 log.go:172] (0x4002451900) (1) Data frame handling
I0819 00:45:19.554109       7 log.go:172] (0x4002451900) (1) Data frame sent
I0819 00:45:19.554255       7 log.go:172] (0x40025133f0) (0x4002451900) Stream removed, broadcasting: 1
I0819 00:45:19.554343       7 log.go:172] (0x40025133f0) Go away received
I0819 00:45:19.554777       7 log.go:172] (0x40025133f0) (0x4002451900) Stream removed, broadcasting: 1
I0819 00:45:19.554909       7 log.go:172] (0x40025133f0) (0x40034f8000) Stream removed, broadcasting: 3
I0819 00:45:19.554997       7 log.go:172] (0x40025133f0) (0x40016e2320) Stream removed, broadcasting: 5
Aug 19 00:45:19.555: INFO: Waiting for endpoints: map[]
Aug 19 00:45:19.559: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.46:8080/dial?request=hostName&protocol=http&host=10.244.2.157&port=8080&tries=1'] Namespace:pod-network-test-8834 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 00:45:19.560: INFO: >>> kubeConfig: /root/.kube/config
I0819 00:45:19.614573       7 log.go:172] (0x4002f40fd0) (0x40034f81e0) Create stream
I0819 00:45:19.614702       7 log.go:172] (0x4002f40fd0) (0x40034f81e0) Stream added, broadcasting: 1
I0819 00:45:19.618224       7 log.go:172] (0x4002f40fd0) Reply frame received for 1
I0819 00:45:19.618421       7 log.go:172] (0x4002f40fd0) (0x40016e23c0) Create stream
I0819 00:45:19.618507       7 log.go:172] (0x4002f40fd0) (0x40016e23c0) Stream added, broadcasting: 3
I0819 00:45:19.620236       7 log.go:172] (0x4002f40fd0) Reply frame received for 3
I0819 00:45:19.620388       7 log.go:172] (0x4002f40fd0) (0x40016e2460) Create stream
I0819 00:45:19.620454       7 log.go:172] (0x4002f40fd0) (0x40016e2460) Stream added, broadcasting: 5
I0819 00:45:19.621738       7 log.go:172] (0x4002f40fd0) Reply frame received for 5
I0819 00:45:19.692876       7 log.go:172] (0x4002f40fd0) Data frame received for 3
I0819 00:45:19.693098       7 log.go:172] (0x40016e23c0) (3) Data frame handling
I0819 00:45:19.693313       7 log.go:172] (0x40016e23c0) (3) Data frame sent
I0819 00:45:19.693455       7 log.go:172] (0x4002f40fd0) Data frame received for 3
I0819 00:45:19.693575       7 log.go:172] (0x40016e23c0) (3) Data frame handling
I0819 00:45:19.693762       7 log.go:172] (0x4002f40fd0) Data frame received for 5
I0819 00:45:19.693903       7 log.go:172] (0x40016e2460) (5) Data frame handling
I0819 00:45:19.695039       7 log.go:172] (0x4002f40fd0) Data frame received for 1
I0819 00:45:19.695147       7 log.go:172] (0x40034f81e0) (1) Data frame handling
I0819 00:45:19.695247       7 log.go:172] (0x40034f81e0) (1) Data frame sent
I0819 00:45:19.695347       7 log.go:172] (0x4002f40fd0) (0x40034f81e0) Stream removed, broadcasting: 1
I0819 00:45:19.695457       7 log.go:172] (0x4002f40fd0) Go away received
I0819 00:45:19.695949       7 log.go:172] (0x4002f40fd0) (0x40034f81e0) Stream removed, broadcasting: 1
I0819 00:45:19.696034       7 log.go:172] (0x4002f40fd0) (0x40016e23c0) Stream removed, broadcasting: 3
I0819 00:45:19.696103       7 log.go:172] (0x4002f40fd0) (0x40016e2460) Stream removed, broadcasting: 5
Aug 19 00:45:19.696: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:45:19.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8834" for this suite.
Aug 19 00:45:43.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:45:43.928: INFO: namespace pod-network-test-8834 deletion completed in 24.221838662s

• [SLOW TEST:46.897 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:45:43.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-167a8731-c92c-4c74-8fc4-a96b43fc2187
STEP: Creating configMap with name cm-test-opt-upd-37969bc4-8a15-49e4-84a2-ea5387cbb85d
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-167a8731-c92c-4c74-8fc4-a96b43fc2187
STEP: Updating configmap cm-test-opt-upd-37969bc4-8a15-49e4-84a2-ea5387cbb85d
STEP: Creating configMap with name cm-test-opt-create-5d850219-659a-48aa-a262-d46f61a107cb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:45:58.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6502" for this suite.
Aug 19 00:46:22.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:46:22.570: INFO: namespace configmap-6502 deletion completed in 24.16033139s

• [SLOW TEST:38.640 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:46:22.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Aug 19 00:46:22.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 19 00:46:23.986: INFO: stderr: ""
Aug 19 00:46:23.986: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35471\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35471/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:46:23.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4835" for this suite.
Aug 19 00:46:30.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:46:30.256: INFO: namespace kubectl-4835 deletion completed in 6.259269512s

• [SLOW TEST:7.685 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:46:30.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:47:30.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8173" for this suite.
Aug 19 00:47:52.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:47:52.582: INFO: namespace container-probe-8173 deletion completed in 22.204750756s

• [SLOW TEST:82.323 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:47:52.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-718317b9-d757-4125-9e4e-809f1a949d0e
STEP: Creating a pod to test consume secrets
Aug 19 00:47:52.688: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9d3e4a41-a924-496e-a81c-89b0cd8875a0" in namespace "projected-9098" to be "success or failure"
Aug 19 00:47:52.776: INFO: Pod "pod-projected-secrets-9d3e4a41-a924-496e-a81c-89b0cd8875a0": Phase="Pending", Reason="", readiness=false. Elapsed: 88.029311ms
Aug 19 00:47:54.783: INFO: Pod "pod-projected-secrets-9d3e4a41-a924-496e-a81c-89b0cd8875a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095145391s
Aug 19 00:47:56.790: INFO: Pod "pod-projected-secrets-9d3e4a41-a924-496e-a81c-89b0cd8875a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102315523s
STEP: Saw pod success
Aug 19 00:47:56.790: INFO: Pod "pod-projected-secrets-9d3e4a41-a924-496e-a81c-89b0cd8875a0" satisfied condition "success or failure"
Aug 19 00:47:56.796: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-9d3e4a41-a924-496e-a81c-89b0cd8875a0 container projected-secret-volume-test: 
STEP: delete the pod
Aug 19 00:47:56.833: INFO: Waiting for pod pod-projected-secrets-9d3e4a41-a924-496e-a81c-89b0cd8875a0 to disappear
Aug 19 00:47:56.849: INFO: Pod pod-projected-secrets-9d3e4a41-a924-496e-a81c-89b0cd8875a0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:47:56.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9098" for this suite.
Aug 19 00:48:02.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:48:03.123: INFO: namespace projected-9098 deletion completed in 6.201618533s

• [SLOW TEST:10.541 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:48:03.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:48:03.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-484" for this suite.
Aug 19 00:48:25.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:48:26.101: INFO: namespace pods-484 deletion completed in 22.740588848s

• [SLOW TEST:22.974 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:48:26.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:48:30.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1568" for this suite.
Aug 19 00:49:16.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:49:16.667: INFO: namespace kubelet-test-1568 deletion completed in 46.419241983s

• [SLOW TEST:50.566 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:49:16.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 19 00:49:16.826: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:49:16.833: INFO: Number of nodes with available pods: 0
Aug 19 00:49:16.833: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:49:17.845: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:49:17.851: INFO: Number of nodes with available pods: 0
Aug 19 00:49:17.851: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:49:18.859: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:49:18.864: INFO: Number of nodes with available pods: 0
Aug 19 00:49:18.864: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:49:19.844: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:49:19.849: INFO: Number of nodes with available pods: 0
Aug 19 00:49:19.849: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 00:49:20.844: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:49:20.870: INFO: Number of nodes with available pods: 2
Aug 19 00:49:20.871: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 19 00:49:20.971: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:49:21.068: INFO: Number of nodes with available pods: 1
Aug 19 00:49:21.068: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:49:22.152: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:49:22.157: INFO: Number of nodes with available pods: 1
Aug 19 00:49:22.157: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:49:23.078: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:49:23.084: INFO: Number of nodes with available pods: 1
Aug 19 00:49:23.084: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:49:24.105: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:49:24.112: INFO: Number of nodes with available pods: 1
Aug 19 00:49:24.112: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 00:49:25.080: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 00:49:25.085: INFO: Number of nodes with available pods: 2
Aug 19 00:49:25.085: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4115, will wait for the garbage collector to delete the pods
Aug 19 00:49:25.158: INFO: Deleting DaemonSet.extensions daemon-set took: 8.777675ms
Aug 19 00:49:25.459: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.741112ms
Aug 19 00:49:33.765: INFO: Number of nodes with available pods: 0
Aug 19 00:49:33.765: INFO: Number of running nodes: 0, number of available pods: 0
Aug 19 00:49:33.770: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4115/daemonsets","resourceVersion":"937936"},"items":null}

Aug 19 00:49:33.802: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4115/pods","resourceVersion":"937936"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:49:33.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4115" for this suite.
Aug 19 00:49:39.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:49:39.989: INFO: namespace daemonsets-4115 deletion completed in 6.15719958s

• [SLOW TEST:23.321 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
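The DaemonSet spec above ("should retry creating failed daemon pods") creates a DaemonSet, kills one of its pods, and polls until the controller recreates it on every schedulable node; the "can't tolerate node iruya-control-plane" lines show the default behavior of not tolerating the master taint. A minimal sketch of the kind of DaemonSet the test drives (label/selector values and container name are assumptions, not taken from the log):

```yaml
# Hypothetical reconstruction of the DaemonSet under test.
# Name and image mirror the log; labels are assumed.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-4115
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      # No toleration for node-role.kubernetes.io/master:NoSchedule,
      # so the control-plane node is skipped, as the log records.
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```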
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:49:39.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:49:40.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6859" for this suite.
Aug 19 00:49:46.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:49:46.344: INFO: namespace kubelet-test-6859 deletion completed in 6.178456122s

• [SLOW TEST:6.354 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
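The Kubelet spec above verifies that a pod whose container command always exits non-zero (and therefore crash-loops) can still be deleted cleanly. A sketch of such a pod, with hypothetical names:

```yaml
# Hypothetical pod whose command always fails; name is assumed.
apiVersion: v1
kind: Pod
metadata:
  name: always-fails
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # exits 1 every time
  # restartPolicy defaults to Always, so the kubelet keeps
  # restarting the container (CrashLoopBackOff) until the pod
  # object is deleted -- which is what the test asserts works.
```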
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:49:46.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 19 00:49:46.415: INFO: Waiting up to 5m0s for pod "pod-621b9cd8-051e-4560-992d-cb564957745f" in namespace "emptydir-7809" to be "success or failure"
Aug 19 00:49:46.443: INFO: Pod "pod-621b9cd8-051e-4560-992d-cb564957745f": Phase="Pending", Reason="", readiness=false. Elapsed: 27.866957ms
Aug 19 00:49:48.451: INFO: Pod "pod-621b9cd8-051e-4560-992d-cb564957745f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035154995s
Aug 19 00:49:50.457: INFO: Pod "pod-621b9cd8-051e-4560-992d-cb564957745f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04169868s
STEP: Saw pod success
Aug 19 00:49:50.458: INFO: Pod "pod-621b9cd8-051e-4560-992d-cb564957745f" satisfied condition "success or failure"
Aug 19 00:49:50.462: INFO: Trying to get logs from node iruya-worker pod pod-621b9cd8-051e-4560-992d-cb564957745f container test-container: 
STEP: delete the pod
Aug 19 00:49:50.497: INFO: Waiting for pod pod-621b9cd8-051e-4560-992d-cb564957745f to disappear
Aug 19 00:49:50.526: INFO: Pod pod-621b9cd8-051e-4560-992d-cb564957745f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:49:50.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7809" for this suite.
Aug 19 00:49:56.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:49:56.679: INFO: namespace emptydir-7809 deletion completed in 6.144825556s

• [SLOW TEST:10.332 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
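The EmptyDir spec above ("non-root,0666,default") runs a pod as a non-root user, writes a file with mode 0666 into an emptyDir on the default medium (node disk), and checks the resulting permissions. A rough sketch under those assumptions (user ID, paths, and command are illustrative, not from the log):

```yaml
# Hypothetical pod exercising emptyDir (default medium) as non-root.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-test
spec:
  securityContext:
    runAsUser: 1001          # non-root; exact UID assumed
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c",
      "touch /mnt/vol/f && chmod 0666 /mnt/vol/f && stat -c '%a' /mnt/vol/f"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/vol
  restartPolicy: Never
  volumes:
  - name: vol
    emptyDir: {}             # default medium: backed by node storage
```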
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:49:56.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 19 00:49:56.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-117'
Aug 19 00:49:58.057: INFO: stderr: ""
Aug 19 00:49:58.057: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Aug 19 00:50:03.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-117 -o json'
Aug 19 00:50:04.341: INFO: stderr: ""
Aug 19 00:50:04.342: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-19T00:49:57Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-117\",\n        \"resourceVersion\": \"938066\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-117/pods/e2e-test-nginx-pod\",\n        \"uid\": \"b6526479-1585-4266-bef7-7726746fe8de\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-8sbrw\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-8sbrw\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-8sbrw\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-19T00:49:58Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-19T00:50:00Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-19T00:50:00Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-19T00:49:57Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://c9f10def73b75e9ceb951285b5dcf86feeb73ae83535058793c9e03e0b2d11f8\",\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        
\"startedAt\": \"2020-08-19T00:50:00Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.5\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.162\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-19T00:49:58Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 19 00:50:04.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-117'
Aug 19 00:50:05.979: INFO: stderr: ""
Aug 19 00:50:05.979: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Aug 19 00:50:06.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-117'
Aug 19 00:50:09.992: INFO: stderr: ""
Aug 19 00:50:09.992: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:50:09.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-117" for this suite.
Aug 19 00:50:16.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:50:16.186: INFO: namespace kubectl-117 deletion completed in 6.16143892s

• [SLOW TEST:19.505 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
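In the Kubectl replace spec above, the test first runs the pod with `nginx:1.14-alpine`, fetches it as JSON, swaps the image, and pipes the result to `kubectl replace -f -`. The replacement manifest would be shaped roughly like this (a container image is one of the few pod-spec fields that may be updated in place, which is why `replace` succeeds):

```yaml
# Sketch of the replacement piped to `kubectl replace -f -`;
# only the image differs from the originally created pod.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-117
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # was nginx:1.14-alpine
```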
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:50:16.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 00:50:16.267: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e25aa78e-ef1a-466d-8f11-4632f921eded" in namespace "downward-api-4628" to be "success or failure"
Aug 19 00:50:16.295: INFO: Pod "downwardapi-volume-e25aa78e-ef1a-466d-8f11-4632f921eded": Phase="Pending", Reason="", readiness=false. Elapsed: 28.105089ms
Aug 19 00:50:18.302: INFO: Pod "downwardapi-volume-e25aa78e-ef1a-466d-8f11-4632f921eded": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035107371s
Aug 19 00:50:20.310: INFO: Pod "downwardapi-volume-e25aa78e-ef1a-466d-8f11-4632f921eded": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042945371s
STEP: Saw pod success
Aug 19 00:50:20.310: INFO: Pod "downwardapi-volume-e25aa78e-ef1a-466d-8f11-4632f921eded" satisfied condition "success or failure"
Aug 19 00:50:20.315: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e25aa78e-ef1a-466d-8f11-4632f921eded container client-container: 
STEP: delete the pod
Aug 19 00:50:20.351: INFO: Waiting for pod downwardapi-volume-e25aa78e-ef1a-466d-8f11-4632f921eded to disappear
Aug 19 00:50:20.427: INFO: Pod downwardapi-volume-e25aa78e-ef1a-466d-8f11-4632f921eded no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:50:20.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4628" for this suite.
Aug 19 00:50:26.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:50:26.619: INFO: namespace downward-api-4628 deletion completed in 6.182977921s

• [SLOW TEST:10.428 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
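The Downward API spec above mounts a `downwardAPI` volume that projects the container's CPU limit into a file, then reads the file back. A self-contained sketch (resource values, names, and paths are assumptions; the `resourceFieldRef`/`divisor` fields are the standard API for this):

```yaml
# Hypothetical pod projecting its own CPU limit via a downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"           # value assumed
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  restartPolicy: Never
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m         # report the limit in millicores
```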
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:50:26.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 19 00:50:31.808: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:50:31.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2972" for this suite.
Aug 19 00:50:53.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:50:54.141: INFO: namespace replicaset-2972 deletion completed in 22.249778223s

• [SLOW TEST:27.519 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
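The ReplicaSet spec above creates a bare pod first, then a ReplicaSet whose selector matches it: the controller adopts the orphan, and when the test changes the pod's label so it no longer matches, the pod is released and the ReplicaSet spins up a replacement. A sketch of the matching ReplicaSet (label key/value and container details are assumed):

```yaml
# Hypothetical ReplicaSet whose selector matches the pre-existing
# orphan pod labeled name=pod-adoption-release.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

Relabeling the adopted pod (e.g. `kubectl label pod <pod> name=released --overwrite`) is what triggers the release step the log records.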
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:50:54.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-11.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-11.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-11.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-11.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-11.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-11.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-11.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-11.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-11.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-11.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-11.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 41.222.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.222.41_udp@PTR;check="$$(dig +tcp +noall +answer +search 41.222.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.222.41_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-11.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-11.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-11.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-11.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-11.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-11.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-11.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-11.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-11.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-11.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-11.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 41.222.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.222.41_udp@PTR;check="$$(dig +tcp +noall +answer +search 41.222.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.222.41_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 19 00:51:00.384: INFO: Unable to read wheezy_udp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:00.389: INFO: Unable to read wheezy_tcp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:00.393: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:00.397: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:00.422: INFO: Unable to read jessie_udp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:00.426: INFO: Unable to read jessie_tcp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:00.430: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:00.434: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:00.455: INFO: Lookups using dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c failed for: [wheezy_udp@dns-test-service.dns-11.svc.cluster.local wheezy_tcp@dns-test-service.dns-11.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local jessie_udp@dns-test-service.dns-11.svc.cluster.local jessie_tcp@dns-test-service.dns-11.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local]

Aug 19 00:51:05.462: INFO: Unable to read wheezy_udp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:05.467: INFO: Unable to read wheezy_tcp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:05.471: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:05.475: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:05.500: INFO: Unable to read jessie_udp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:05.504: INFO: Unable to read jessie_tcp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:05.508: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:05.512: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:05.537: INFO: Lookups using dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c failed for: [wheezy_udp@dns-test-service.dns-11.svc.cluster.local wheezy_tcp@dns-test-service.dns-11.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local jessie_udp@dns-test-service.dns-11.svc.cluster.local jessie_tcp@dns-test-service.dns-11.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local]

Aug 19 00:51:10.462: INFO: Unable to read wheezy_udp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:10.467: INFO: Unable to read wheezy_tcp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:10.472: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:10.476: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:10.503: INFO: Unable to read jessie_udp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:10.507: INFO: Unable to read jessie_tcp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:10.511: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:10.516: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:10.541: INFO: Lookups using dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c failed for: [wheezy_udp@dns-test-service.dns-11.svc.cluster.local wheezy_tcp@dns-test-service.dns-11.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local jessie_udp@dns-test-service.dns-11.svc.cluster.local jessie_tcp@dns-test-service.dns-11.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local]

Aug 19 00:51:15.462: INFO: Unable to read wheezy_udp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:15.467: INFO: Unable to read wheezy_tcp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:15.472: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:15.476: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:15.503: INFO: Unable to read jessie_udp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:15.507: INFO: Unable to read jessie_tcp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:15.511: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:15.515: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:15.540: INFO: Lookups using dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c failed for: [wheezy_udp@dns-test-service.dns-11.svc.cluster.local wheezy_tcp@dns-test-service.dns-11.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local jessie_udp@dns-test-service.dns-11.svc.cluster.local jessie_tcp@dns-test-service.dns-11.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local]

Aug 19 00:51:20.462: INFO: Unable to read wheezy_udp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:20.466: INFO: Unable to read wheezy_tcp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:20.471: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:20.475: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:20.501: INFO: Unable to read jessie_udp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:20.506: INFO: Unable to read jessie_tcp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:20.510: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:20.514: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:20.538: INFO: Lookups using dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c failed for: [wheezy_udp@dns-test-service.dns-11.svc.cluster.local wheezy_tcp@dns-test-service.dns-11.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local jessie_udp@dns-test-service.dns-11.svc.cluster.local jessie_tcp@dns-test-service.dns-11.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local]

Aug 19 00:51:25.463: INFO: Unable to read wheezy_udp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:25.468: INFO: Unable to read wheezy_tcp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:25.472: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:25.477: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:25.509: INFO: Unable to read jessie_udp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:25.513: INFO: Unable to read jessie_tcp@dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:25.516: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:25.520: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local from pod dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c: the server could not find the requested resource (get pods dns-test-78483ae1-103e-449a-af81-2a119deca93c)
Aug 19 00:51:25.546: INFO: Lookups using dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c failed for: [wheezy_udp@dns-test-service.dns-11.svc.cluster.local wheezy_tcp@dns-test-service.dns-11.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local jessie_udp@dns-test-service.dns-11.svc.cluster.local jessie_tcp@dns-test-service.dns-11.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-11.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-11.svc.cluster.local]

Aug 19 00:51:30.542: INFO: DNS probes using dns-11/dns-test-78483ae1-103e-449a-af81-2a119deca93c succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:51:31.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-11" for this suite.
Aug 19 00:51:37.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:51:37.766: INFO: namespace dns-11 deletion completed in 6.247769924s

• [SLOW TEST:43.624 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:51:37.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5237
I0819 00:51:37.867368       7 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5237, replica count: 1
I0819 00:51:38.919148       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0819 00:51:39.919936       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0819 00:51:40.920586       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0819 00:51:41.921387       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0819 00:51:42.921993       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 19 00:51:43.061: INFO: Created: latency-svc-6ztlb
Aug 19 00:51:43.077: INFO: Got endpoints: latency-svc-6ztlb [52.285183ms]
Aug 19 00:51:43.157: INFO: Created: latency-svc-n78lp
Aug 19 00:51:43.174: INFO: Got endpoints: latency-svc-n78lp [96.320904ms]
Aug 19 00:51:43.203: INFO: Created: latency-svc-9gjxx
Aug 19 00:51:43.217: INFO: Got endpoints: latency-svc-9gjxx [139.038004ms]
Aug 19 00:51:43.278: INFO: Created: latency-svc-t9v6s
Aug 19 00:51:43.281: INFO: Got endpoints: latency-svc-t9v6s [201.452051ms]
Aug 19 00:51:43.319: INFO: Created: latency-svc-f9bbh
Aug 19 00:51:43.343: INFO: Got endpoints: latency-svc-f9bbh [263.609075ms]
Aug 19 00:51:43.370: INFO: Created: latency-svc-tcjb5
Aug 19 00:51:43.433: INFO: Got endpoints: latency-svc-tcjb5 [355.138455ms]
Aug 19 00:51:43.462: INFO: Created: latency-svc-292d2
Aug 19 00:51:43.479: INFO: Got endpoints: latency-svc-292d2 [401.525608ms]
Aug 19 00:51:43.523: INFO: Created: latency-svc-62gnd
Aug 19 00:51:43.614: INFO: Got endpoints: latency-svc-62gnd [534.935453ms]
Aug 19 00:51:43.619: INFO: Created: latency-svc-55jl7
Aug 19 00:51:43.630: INFO: Got endpoints: latency-svc-55jl7 [551.640002ms]
Aug 19 00:51:43.657: INFO: Created: latency-svc-7wbb4
Aug 19 00:51:43.691: INFO: Got endpoints: latency-svc-7wbb4 [612.524561ms]
Aug 19 00:51:43.781: INFO: Created: latency-svc-fz8db
Aug 19 00:51:43.834: INFO: Created: latency-svc-nnd7q
Aug 19 00:51:43.834: INFO: Got endpoints: latency-svc-fz8db [755.09968ms]
Aug 19 00:51:43.943: INFO: Got endpoints: latency-svc-nnd7q [864.197938ms]
Aug 19 00:51:43.945: INFO: Created: latency-svc-vb8ws
Aug 19 00:51:43.967: INFO: Got endpoints: latency-svc-vb8ws [888.460181ms]
Aug 19 00:51:44.001: INFO: Created: latency-svc-wfl7l
Aug 19 00:51:44.015: INFO: Got endpoints: latency-svc-wfl7l [935.894036ms]
Aug 19 00:51:44.129: INFO: Created: latency-svc-njt78
Aug 19 00:51:44.137: INFO: Got endpoints: latency-svc-njt78 [1.059336384s]
Aug 19 00:51:44.188: INFO: Created: latency-svc-lrb5j
Aug 19 00:51:44.325: INFO: Got endpoints: latency-svc-lrb5j [1.242396059s]
Aug 19 00:51:44.333: INFO: Created: latency-svc-pgcxp
Aug 19 00:51:44.341: INFO: Got endpoints: latency-svc-pgcxp [1.165864793s]
Aug 19 00:51:44.374: INFO: Created: latency-svc-lkp9w
Aug 19 00:51:44.383: INFO: Got endpoints: latency-svc-lkp9w [1.165469586s]
Aug 19 00:51:44.512: INFO: Created: latency-svc-crmtd
Aug 19 00:51:44.527: INFO: Got endpoints: latency-svc-crmtd [1.246406472s]
Aug 19 00:51:44.574: INFO: Created: latency-svc-72t22
Aug 19 00:51:44.594: INFO: Got endpoints: latency-svc-72t22 [1.250669585s]
Aug 19 00:51:44.710: INFO: Created: latency-svc-jpstg
Aug 19 00:51:44.719: INFO: Got endpoints: latency-svc-jpstg [1.286084818s]
Aug 19 00:51:44.754: INFO: Created: latency-svc-dfqfb
Aug 19 00:51:44.792: INFO: Got endpoints: latency-svc-dfqfb [1.312272018s]
Aug 19 00:51:44.883: INFO: Created: latency-svc-qd2zg
Aug 19 00:51:44.886: INFO: Got endpoints: latency-svc-qd2zg [1.271617966s]
Aug 19 00:51:44.939: INFO: Created: latency-svc-d4qrp
Aug 19 00:51:44.954: INFO: Got endpoints: latency-svc-d4qrp [1.323960459s]
Aug 19 00:51:45.053: INFO: Created: latency-svc-slfft
Aug 19 00:51:45.057: INFO: Got endpoints: latency-svc-slfft [1.366098375s]
Aug 19 00:51:45.099: INFO: Created: latency-svc-d8xqf
Aug 19 00:51:45.117: INFO: Got endpoints: latency-svc-d8xqf [1.282725703s]
Aug 19 00:51:45.137: INFO: Created: latency-svc-9t667
Aug 19 00:51:45.212: INFO: Got endpoints: latency-svc-9t667 [1.269011189s]
Aug 19 00:51:45.262: INFO: Created: latency-svc-c2mlx
Aug 19 00:51:45.267: INFO: Got endpoints: latency-svc-c2mlx [1.299367371s]
Aug 19 00:51:45.301: INFO: Created: latency-svc-lr7xj
Aug 19 00:51:45.308: INFO: Got endpoints: latency-svc-lr7xj [1.292798656s]
Aug 19 00:51:45.349: INFO: Created: latency-svc-p5kj7
Aug 19 00:51:45.356: INFO: Got endpoints: latency-svc-p5kj7 [1.218096031s]
Aug 19 00:51:45.399: INFO: Created: latency-svc-c857x
Aug 19 00:51:45.410: INFO: Got endpoints: latency-svc-c857x [1.084258896s]
Aug 19 00:51:45.439: INFO: Created: latency-svc-w954h
Aug 19 00:51:45.440: INFO: Got endpoints: latency-svc-w954h [1.099192947s]
Aug 19 00:51:45.505: INFO: Created: latency-svc-gct7c
Aug 19 00:51:45.545: INFO: Got endpoints: latency-svc-gct7c [1.162186282s]
Aug 19 00:51:45.580: INFO: Created: latency-svc-jg7qp
Aug 19 00:51:45.603: INFO: Got endpoints: latency-svc-jg7qp [1.075534298s]
Aug 19 00:51:45.656: INFO: Created: latency-svc-8b5x9
Aug 19 00:51:45.665: INFO: Got endpoints: latency-svc-8b5x9 [1.071158694s]
Aug 19 00:51:45.714: INFO: Created: latency-svc-85n7h
Aug 19 00:51:45.742: INFO: Got endpoints: latency-svc-85n7h [1.022858297s]
Aug 19 00:51:45.841: INFO: Created: latency-svc-24jmp
Aug 19 00:51:45.843: INFO: Got endpoints: latency-svc-24jmp [1.050792827s]
Aug 19 00:51:45.892: INFO: Created: latency-svc-krhxk
Aug 19 00:51:45.904: INFO: Got endpoints: latency-svc-krhxk [1.018136037s]
Aug 19 00:51:45.932: INFO: Created: latency-svc-5wrqc
Aug 19 00:51:45.984: INFO: Got endpoints: latency-svc-5wrqc [1.029341129s]
Aug 19 00:51:45.997: INFO: Created: latency-svc-rkw4h
Aug 19 00:51:46.013: INFO: Got endpoints: latency-svc-rkw4h [955.738252ms]
Aug 19 00:51:46.069: INFO: Created: latency-svc-tqkgv
Aug 19 00:51:46.188: INFO: Got endpoints: latency-svc-tqkgv [1.07061823s]
Aug 19 00:51:46.191: INFO: Created: latency-svc-2mq5b
Aug 19 00:51:46.205: INFO: Got endpoints: latency-svc-2mq5b [992.370894ms]
Aug 19 00:51:46.281: INFO: Created: latency-svc-94h5r
Aug 19 00:51:46.349: INFO: Got endpoints: latency-svc-94h5r [1.08214038s]
Aug 19 00:51:46.361: INFO: Created: latency-svc-h7qws
Aug 19 00:51:46.392: INFO: Got endpoints: latency-svc-h7qws [1.083355189s]
Aug 19 00:51:46.420: INFO: Created: latency-svc-tbkxj
Aug 19 00:51:46.428: INFO: Got endpoints: latency-svc-tbkxj [1.071998014s]
Aug 19 00:51:46.453: INFO: Created: latency-svc-52vsr
Aug 19 00:51:46.525: INFO: Got endpoints: latency-svc-52vsr [1.115133164s]
Aug 19 00:51:46.529: INFO: Created: latency-svc-cxkqk
Aug 19 00:51:46.542: INFO: Got endpoints: latency-svc-cxkqk [1.102187237s]
Aug 19 00:51:46.599: INFO: Created: latency-svc-s5bc8
Aug 19 00:51:46.609: INFO: Got endpoints: latency-svc-s5bc8 [1.06328786s]
Aug 19 00:51:46.691: INFO: Created: latency-svc-fkrgb
Aug 19 00:51:46.694: INFO: Got endpoints: latency-svc-fkrgb [1.090959215s]
Aug 19 00:51:46.787: INFO: Created: latency-svc-wvkmx
Aug 19 00:51:46.858: INFO: Got endpoints: latency-svc-wvkmx [1.193080917s]
Aug 19 00:51:46.890: INFO: Created: latency-svc-tvl77
Aug 19 00:51:46.897: INFO: Got endpoints: latency-svc-tvl77 [1.154328656s]
Aug 19 00:51:46.923: INFO: Created: latency-svc-lpf48
Aug 19 00:51:46.933: INFO: Got endpoints: latency-svc-lpf48 [1.090185872s]
Aug 19 00:51:47.027: INFO: Created: latency-svc-v6s9k
Aug 19 00:51:47.029: INFO: Got endpoints: latency-svc-v6s9k [1.125252283s]
Aug 19 00:51:47.170: INFO: Created: latency-svc-6f8bc
Aug 19 00:51:47.173: INFO: Got endpoints: latency-svc-6f8bc [1.188633777s]
Aug 19 00:51:47.251: INFO: Created: latency-svc-jxs4v
Aug 19 00:51:47.265: INFO: Got endpoints: latency-svc-jxs4v [1.251793624s]
Aug 19 00:51:47.302: INFO: Created: latency-svc-256ng
Aug 19 00:51:47.313: INFO: Got endpoints: latency-svc-256ng [1.125389165s]
Aug 19 00:51:47.348: INFO: Created: latency-svc-bdqm2
Aug 19 00:51:47.367: INFO: Got endpoints: latency-svc-bdqm2 [1.162077254s]
Aug 19 00:51:47.451: INFO: Created: latency-svc-2zbgz
Aug 19 00:51:47.469: INFO: Got endpoints: latency-svc-2zbgz [1.11957522s]
Aug 19 00:51:47.496: INFO: Created: latency-svc-76nmr
Aug 19 00:51:47.505: INFO: Got endpoints: latency-svc-76nmr [1.113427182s]
Aug 19 00:51:47.525: INFO: Created: latency-svc-dwpcz
Aug 19 00:51:47.607: INFO: Got endpoints: latency-svc-dwpcz [1.178754076s]
Aug 19 00:51:47.638: INFO: Created: latency-svc-csz7j
Aug 19 00:51:47.680: INFO: Got endpoints: latency-svc-csz7j [1.154150158s]
Aug 19 00:51:47.769: INFO: Created: latency-svc-rljhq
Aug 19 00:51:47.820: INFO: Created: latency-svc-wfb6w
Aug 19 00:51:47.822: INFO: Got endpoints: latency-svc-rljhq [1.279352348s]
Aug 19 00:51:47.829: INFO: Got endpoints: latency-svc-wfb6w [1.220666336s]
Aug 19 00:51:47.854: INFO: Created: latency-svc-qqsrh
Aug 19 00:51:47.924: INFO: Got endpoints: latency-svc-qqsrh [1.22956614s]
Aug 19 00:51:47.952: INFO: Created: latency-svc-htt7p
Aug 19 00:51:47.962: INFO: Got endpoints: latency-svc-htt7p [1.103472804s]
Aug 19 00:51:47.983: INFO: Created: latency-svc-xk2kq
Aug 19 00:51:47.998: INFO: Got endpoints: latency-svc-xk2kq [1.101235245s]
Aug 19 00:51:48.105: INFO: Created: latency-svc-8nhf2
Aug 19 00:51:48.108: INFO: Got endpoints: latency-svc-8nhf2 [1.174349878s]
Aug 19 00:51:48.151: INFO: Created: latency-svc-l5769
Aug 19 00:51:48.278: INFO: Got endpoints: latency-svc-l5769 [1.248741834s]
Aug 19 00:51:48.281: INFO: Created: latency-svc-kt6xx
Aug 19 00:51:48.286: INFO: Got endpoints: latency-svc-kt6xx [1.113198951s]
Aug 19 00:51:48.318: INFO: Created: latency-svc-mnl76
Aug 19 00:51:48.324: INFO: Got endpoints: latency-svc-mnl76 [1.058602153s]
Aug 19 00:51:48.427: INFO: Created: latency-svc-r4pr2
Aug 19 00:51:48.429: INFO: Got endpoints: latency-svc-r4pr2 [1.115904283s]
Aug 19 00:51:48.479: INFO: Created: latency-svc-zthv7
Aug 19 00:51:48.509: INFO: Got endpoints: latency-svc-zthv7 [1.141939463s]
Aug 19 00:51:48.589: INFO: Created: latency-svc-nbqmp
Aug 19 00:51:48.600: INFO: Got endpoints: latency-svc-nbqmp [1.130468308s]
Aug 19 00:51:48.648: INFO: Created: latency-svc-4mgmc
Aug 19 00:51:48.672: INFO: Got endpoints: latency-svc-4mgmc [1.16663651s]
Aug 19 00:51:48.781: INFO: Created: latency-svc-lmzcl
Aug 19 00:51:48.784: INFO: Got endpoints: latency-svc-lmzcl [1.176793547s]
Aug 19 00:51:48.955: INFO: Created: latency-svc-n8hmz
Aug 19 00:51:48.959: INFO: Got endpoints: latency-svc-n8hmz [1.279040761s]
Aug 19 00:51:49.002: INFO: Created: latency-svc-qw8zn
Aug 19 00:51:49.015: INFO: Got endpoints: latency-svc-qw8zn [1.192743325s]
Aug 19 00:51:49.051: INFO: Created: latency-svc-98lph
Aug 19 00:51:49.128: INFO: Got endpoints: latency-svc-98lph [1.298242283s]
Aug 19 00:51:49.132: INFO: Created: latency-svc-dsqql
Aug 19 00:51:49.142: INFO: Got endpoints: latency-svc-dsqql [1.217058937s]
Aug 19 00:51:49.226: INFO: Created: latency-svc-z57kf
Aug 19 00:51:49.290: INFO: Got endpoints: latency-svc-z57kf [1.32745319s]
Aug 19 00:51:49.293: INFO: Created: latency-svc-dfm9z
Aug 19 00:51:49.297: INFO: Got endpoints: latency-svc-dfm9z [1.298744147s]
Aug 19 00:51:49.348: INFO: Created: latency-svc-czsl9
Aug 19 00:51:49.376: INFO: Got endpoints: latency-svc-czsl9 [1.267600146s]
Aug 19 00:51:49.453: INFO: Created: latency-svc-gr2zw
Aug 19 00:51:49.471: INFO: Got endpoints: latency-svc-gr2zw [1.193124146s]
Aug 19 00:51:49.516: INFO: Created: latency-svc-sd7jk
Aug 19 00:51:49.583: INFO: Got endpoints: latency-svc-sd7jk [1.296647929s]
Aug 19 00:51:49.621: INFO: Created: latency-svc-5w26f
Aug 19 00:51:49.641: INFO: Got endpoints: latency-svc-5w26f [1.317157915s]
Aug 19 00:51:49.794: INFO: Created: latency-svc-vr4pw
Aug 19 00:51:49.798: INFO: Got endpoints: latency-svc-vr4pw [1.368430879s]
Aug 19 00:51:49.855: INFO: Created: latency-svc-8glvh
Aug 19 00:51:49.869: INFO: Got endpoints: latency-svc-8glvh [1.359787737s]
Aug 19 00:51:49.962: INFO: Created: latency-svc-2th5l
Aug 19 00:51:49.964: INFO: Got endpoints: latency-svc-2th5l [1.363664121s]
Aug 19 00:51:50.023: INFO: Created: latency-svc-79mcl
Aug 19 00:51:50.038: INFO: Got endpoints: latency-svc-79mcl [1.365826952s]
Aug 19 00:51:50.158: INFO: Created: latency-svc-dnjxq
Aug 19 00:51:50.161: INFO: Got endpoints: latency-svc-dnjxq [1.377438128s]
Aug 19 00:51:50.254: INFO: Created: latency-svc-cvrrw
Aug 19 00:51:50.350: INFO: Got endpoints: latency-svc-cvrrw [1.390512819s]
Aug 19 00:51:50.353: INFO: Created: latency-svc-7q5tv
Aug 19 00:51:50.363: INFO: Got endpoints: latency-svc-7q5tv [1.347398207s]
Aug 19 00:51:50.395: INFO: Created: latency-svc-bvd9f
Aug 19 00:51:50.416: INFO: Got endpoints: latency-svc-bvd9f [1.287644642s]
Aug 19 00:51:50.542: INFO: Created: latency-svc-jw2mz
Aug 19 00:51:50.599: INFO: Got endpoints: latency-svc-jw2mz [1.456822843s]
Aug 19 00:51:50.703: INFO: Created: latency-svc-gcbps
Aug 19 00:51:50.705: INFO: Got endpoints: latency-svc-gcbps [1.41533279s]
Aug 19 00:51:50.841: INFO: Created: latency-svc-72xrr
Aug 19 00:51:50.855: INFO: Got endpoints: latency-svc-72xrr [1.557818998s]
Aug 19 00:51:50.898: INFO: Created: latency-svc-rff2t
Aug 19 00:51:50.908: INFO: Got endpoints: latency-svc-rff2t [1.532575975s]
Aug 19 00:51:50.932: INFO: Created: latency-svc-smvws
Aug 19 00:51:51.002: INFO: Got endpoints: latency-svc-smvws [1.530345531s]
Aug 19 00:51:51.037: INFO: Created: latency-svc-b52br
Aug 19 00:51:51.041: INFO: Got endpoints: latency-svc-b52br [1.457311963s]
Aug 19 00:51:51.182: INFO: Created: latency-svc-8kzvc
Aug 19 00:51:51.187: INFO: Got endpoints: latency-svc-8kzvc [1.545813442s]
Aug 19 00:51:51.245: INFO: Created: latency-svc-mwhvg
Aug 19 00:51:51.276: INFO: Got endpoints: latency-svc-mwhvg [1.478181463s]
Aug 19 00:51:51.338: INFO: Created: latency-svc-nmlx2
Aug 19 00:51:51.372: INFO: Got endpoints: latency-svc-nmlx2 [1.502239774s]
Aug 19 00:51:51.415: INFO: Created: latency-svc-h7qt8
Aug 19 00:51:51.426: INFO: Got endpoints: latency-svc-h7qt8 [1.462224731s]
Aug 19 00:51:51.511: INFO: Created: latency-svc-w9zwj
Aug 19 00:51:51.516: INFO: Got endpoints: latency-svc-w9zwj [1.477520133s]
Aug 19 00:51:51.546: INFO: Created: latency-svc-l54wp
Aug 19 00:51:51.571: INFO: Got endpoints: latency-svc-l54wp [1.409524965s]
Aug 19 00:51:51.609: INFO: Created: latency-svc-5q4f5
Aug 19 00:51:51.679: INFO: Got endpoints: latency-svc-5q4f5 [1.328741068s]
Aug 19 00:51:51.681: INFO: Created: latency-svc-h5n7x
Aug 19 00:51:51.697: INFO: Got endpoints: latency-svc-h5n7x [1.334775124s]
Aug 19 00:51:51.733: INFO: Created: latency-svc-cz9kx
Aug 19 00:51:51.739: INFO: Got endpoints: latency-svc-cz9kx [1.323193541s]
Aug 19 00:51:51.768: INFO: Created: latency-svc-qq5hz
Aug 19 00:51:51.864: INFO: Got endpoints: latency-svc-qq5hz [1.26533961s]
Aug 19 00:51:51.867: INFO: Created: latency-svc-ntvll
Aug 19 00:51:51.907: INFO: Got endpoints: latency-svc-ntvll [1.201487684s]
Aug 19 00:51:51.937: INFO: Created: latency-svc-mjr5s
Aug 19 00:51:52.086: INFO: Got endpoints: latency-svc-mjr5s [1.23024903s]
Aug 19 00:51:52.131: INFO: Created: latency-svc-dqxvb
Aug 19 00:51:52.170: INFO: Got endpoints: latency-svc-dqxvb [1.261413796s]
Aug 19 00:51:52.171: INFO: Created: latency-svc-ljhv7
Aug 19 00:51:52.235: INFO: Got endpoints: latency-svc-ljhv7 [1.232339647s]
Aug 19 00:51:52.261: INFO: Created: latency-svc-5jt4h
Aug 19 00:51:52.269: INFO: Got endpoints: latency-svc-5jt4h [1.22800284s]
Aug 19 00:51:52.314: INFO: Created: latency-svc-l5d4j
Aug 19 00:51:52.379: INFO: Got endpoints: latency-svc-l5d4j [1.191928818s]
Aug 19 00:51:52.410: INFO: Created: latency-svc-gkv89
Aug 19 00:51:52.419: INFO: Got endpoints: latency-svc-gkv89 [1.142495542s]
Aug 19 00:51:52.441: INFO: Created: latency-svc-5bjz2
Aug 19 00:51:52.464: INFO: Got endpoints: latency-svc-5bjz2 [1.092509169s]
Aug 19 00:51:52.533: INFO: Created: latency-svc-lvjz7
Aug 19 00:51:52.570: INFO: Got endpoints: latency-svc-lvjz7 [1.143435354s]
Aug 19 00:51:52.703: INFO: Created: latency-svc-zcbmr
Aug 19 00:51:52.706: INFO: Got endpoints: latency-svc-zcbmr [1.190246766s]
Aug 19 00:51:52.874: INFO: Created: latency-svc-stmx8
Aug 19 00:51:52.909: INFO: Got endpoints: latency-svc-stmx8 [1.337604153s]
Aug 19 00:51:52.909: INFO: Created: latency-svc-fbx8d
Aug 19 00:51:52.966: INFO: Got endpoints: latency-svc-fbx8d [1.287313245s]
Aug 19 00:51:53.016: INFO: Created: latency-svc-zk985
Aug 19 00:51:53.032: INFO: Got endpoints: latency-svc-zk985 [1.334199107s]
Aug 19 00:51:53.065: INFO: Created: latency-svc-scf7w
Aug 19 00:51:53.081: INFO: Got endpoints: latency-svc-scf7w [1.341827721s]
Aug 19 00:51:53.153: INFO: Created: latency-svc-9rp46
Aug 19 00:51:53.159: INFO: Got endpoints: latency-svc-9rp46 [1.293962034s]
Aug 19 00:51:53.195: INFO: Created: latency-svc-8kdt4
Aug 19 00:51:53.213: INFO: Got endpoints: latency-svc-8kdt4 [1.305909604s]
Aug 19 00:51:53.233: INFO: Created: latency-svc-wb8n6
Aug 19 00:51:53.238: INFO: Got endpoints: latency-svc-wb8n6 [1.151299863s]
Aug 19 00:51:53.340: INFO: Created: latency-svc-nm4vr
Aug 19 00:51:53.374: INFO: Got endpoints: latency-svc-nm4vr [1.203989943s]
Aug 19 00:51:53.395: INFO: Created: latency-svc-l9mqv
Aug 19 00:51:53.411: INFO: Got endpoints: latency-svc-l9mqv [1.176510019s]
Aug 19 00:51:53.517: INFO: Created: latency-svc-cdrx9
Aug 19 00:51:53.549: INFO: Got endpoints: latency-svc-cdrx9 [1.280034098s]
Aug 19 00:51:53.550: INFO: Created: latency-svc-czl2d
Aug 19 00:51:53.575: INFO: Got endpoints: latency-svc-czl2d [1.195491184s]
Aug 19 00:51:53.611: INFO: Created: latency-svc-r6trg
Aug 19 00:51:53.678: INFO: Got endpoints: latency-svc-r6trg [1.259315543s]
Aug 19 00:51:53.681: INFO: Created: latency-svc-7v8kh
Aug 19 00:51:53.695: INFO: Got endpoints: latency-svc-7v8kh [1.230026711s]
Aug 19 00:51:53.729: INFO: Created: latency-svc-gv8zm
Aug 19 00:51:53.743: INFO: Got endpoints: latency-svc-gv8zm [1.172723785s]
Aug 19 00:51:53.773: INFO: Created: latency-svc-ztrcr
Aug 19 00:51:53.852: INFO: Got endpoints: latency-svc-ztrcr [1.145777281s]
Aug 19 00:51:53.879: INFO: Created: latency-svc-pwhlh
Aug 19 00:51:53.899: INFO: Got endpoints: latency-svc-pwhlh [990.36408ms]
Aug 19 00:51:53.934: INFO: Created: latency-svc-84ndl
Aug 19 00:51:53.948: INFO: Got endpoints: latency-svc-84ndl [981.20029ms]
Aug 19 00:51:54.008: INFO: Created: latency-svc-68r4q
Aug 19 00:51:54.013: INFO: Got endpoints: latency-svc-68r4q [981.460459ms]
Aug 19 00:51:54.046: INFO: Created: latency-svc-6fw55
Aug 19 00:51:54.062: INFO: Got endpoints: latency-svc-6fw55 [980.982721ms]
Aug 19 00:51:54.111: INFO: Created: latency-svc-jt2cm
Aug 19 00:51:54.200: INFO: Got endpoints: latency-svc-jt2cm [1.041446789s]
Aug 19 00:51:54.202: INFO: Created: latency-svc-ccggv
Aug 19 00:51:54.219: INFO: Got endpoints: latency-svc-ccggv [1.005686767s]
Aug 19 00:51:54.257: INFO: Created: latency-svc-qhqng
Aug 19 00:51:54.273: INFO: Got endpoints: latency-svc-qhqng [1.03493759s]
Aug 19 00:51:54.379: INFO: Created: latency-svc-wgk4t
Aug 19 00:51:54.388: INFO: Got endpoints: latency-svc-wgk4t [1.013086381s]
Aug 19 00:51:54.415: INFO: Created: latency-svc-vchcv
Aug 19 00:51:54.436: INFO: Got endpoints: latency-svc-vchcv [1.024302197s]
Aug 19 00:51:54.467: INFO: Created: latency-svc-qh4pv
Aug 19 00:51:54.530: INFO: Got endpoints: latency-svc-qh4pv [980.40381ms]
Aug 19 00:51:54.551: INFO: Created: latency-svc-lvdkn
Aug 19 00:51:54.566: INFO: Got endpoints: latency-svc-lvdkn [990.448615ms]
Aug 19 00:51:54.590: INFO: Created: latency-svc-b6s6g
Aug 19 00:51:54.608: INFO: Got endpoints: latency-svc-b6s6g [929.171029ms]
Aug 19 00:51:54.667: INFO: Created: latency-svc-ghr7l
Aug 19 00:51:54.670: INFO: Got endpoints: latency-svc-ghr7l [974.982757ms]
Aug 19 00:51:54.703: INFO: Created: latency-svc-rk89q
Aug 19 00:51:54.727: INFO: Got endpoints: latency-svc-rk89q [983.945358ms]
Aug 19 00:51:54.758: INFO: Created: latency-svc-9pmkg
Aug 19 00:51:54.835: INFO: Got endpoints: latency-svc-9pmkg [982.551986ms]
Aug 19 00:51:54.838: INFO: Created: latency-svc-h7fkt
Aug 19 00:51:54.842: INFO: Got endpoints: latency-svc-h7fkt [942.710193ms]
Aug 19 00:51:54.865: INFO: Created: latency-svc-5xj8x
Aug 19 00:51:54.880: INFO: Got endpoints: latency-svc-5xj8x [931.564472ms]
Aug 19 00:51:54.907: INFO: Created: latency-svc-wgtwh
Aug 19 00:51:54.929: INFO: Got endpoints: latency-svc-wgtwh [915.644405ms]
Aug 19 00:51:54.973: INFO: Created: latency-svc-62jtm
Aug 19 00:51:54.975: INFO: Got endpoints: latency-svc-62jtm [912.462365ms]
Aug 19 00:51:55.008: INFO: Created: latency-svc-rv99p
Aug 19 00:51:55.024: INFO: Got endpoints: latency-svc-rv99p [823.374599ms]
Aug 19 00:51:55.051: INFO: Created: latency-svc-6kxbx
Aug 19 00:51:55.122: INFO: Got endpoints: latency-svc-6kxbx [902.445135ms]
Aug 19 00:51:55.124: INFO: Created: latency-svc-7kfrd
Aug 19 00:51:55.145: INFO: Got endpoints: latency-svc-7kfrd [871.609829ms]
Aug 19 00:51:55.175: INFO: Created: latency-svc-xpxnb
Aug 19 00:51:55.205: INFO: Got endpoints: latency-svc-xpxnb [817.598573ms]
Aug 19 00:51:55.259: INFO: Created: latency-svc-8vztv
Aug 19 00:51:55.262: INFO: Got endpoints: latency-svc-8vztv [825.548244ms]
Aug 19 00:51:55.303: INFO: Created: latency-svc-k2f2b
Aug 19 00:51:55.326: INFO: Got endpoints: latency-svc-k2f2b [795.665473ms]
Aug 19 00:51:55.459: INFO: Created: latency-svc-t6w85
Aug 19 00:51:55.461: INFO: Got endpoints: latency-svc-t6w85 [894.59445ms]
Aug 19 00:51:55.605: INFO: Created: latency-svc-lm6kj
Aug 19 00:51:55.621: INFO: Got endpoints: latency-svc-lm6kj [1.012643652s]
Aug 19 00:51:55.640: INFO: Created: latency-svc-jsflt
Aug 19 00:51:55.656: INFO: Got endpoints: latency-svc-jsflt [985.938637ms]
Aug 19 00:51:55.676: INFO: Created: latency-svc-b6njl
Aug 19 00:51:55.686: INFO: Got endpoints: latency-svc-b6njl [959.151998ms]
Aug 19 00:51:55.741: INFO: Created: latency-svc-9r9t9
Aug 19 00:51:55.742: INFO: Got endpoints: latency-svc-9r9t9 [905.918583ms]
Aug 19 00:51:55.796: INFO: Created: latency-svc-r7h2q
Aug 19 00:51:55.812: INFO: Got endpoints: latency-svc-r7h2q [969.911531ms]
Aug 19 00:51:55.973: INFO: Created: latency-svc-kcnpm
Aug 19 00:51:55.976: INFO: Got endpoints: latency-svc-kcnpm [1.096410547s]
Aug 19 00:51:56.004: INFO: Created: latency-svc-9r9cx
Aug 19 00:51:56.029: INFO: Got endpoints: latency-svc-9r9cx [1.099623537s]
Aug 19 00:51:56.633: INFO: Created: latency-svc-z7l77
Aug 19 00:51:56.678: INFO: Got endpoints: latency-svc-z7l77 [1.702921178s]
Aug 19 00:51:56.721: INFO: Created: latency-svc-x9jcl
Aug 19 00:51:56.962: INFO: Got endpoints: latency-svc-x9jcl [1.93711969s]
Aug 19 00:51:57.051: INFO: Created: latency-svc-kw6hz
Aug 19 00:51:57.060: INFO: Got endpoints: latency-svc-kw6hz [1.937853717s]
Aug 19 00:51:57.206: INFO: Created: latency-svc-9d8dn
Aug 19 00:51:57.209: INFO: Got endpoints: latency-svc-9d8dn [2.064396201s]
Aug 19 00:51:57.254: INFO: Created: latency-svc-8nv2w
Aug 19 00:51:57.270: INFO: Got endpoints: latency-svc-8nv2w [2.064607838s]
Aug 19 00:51:57.356: INFO: Created: latency-svc-q28z8
Aug 19 00:51:57.358: INFO: Got endpoints: latency-svc-q28z8 [2.096649088s]
Aug 19 00:51:57.417: INFO: Created: latency-svc-h67zr
Aug 19 00:51:57.434: INFO: Got endpoints: latency-svc-h67zr [2.107568362s]
Aug 19 00:51:57.535: INFO: Created: latency-svc-rvsnv
Aug 19 00:51:57.537: INFO: Got endpoints: latency-svc-rvsnv [2.0759486s]
Aug 19 00:51:57.571: INFO: Created: latency-svc-dzhd9
Aug 19 00:51:57.615: INFO: Got endpoints: latency-svc-dzhd9 [1.993767536s]
Aug 19 00:51:57.872: INFO: Created: latency-svc-jrbjw
Aug 19 00:51:57.877: INFO: Got endpoints: latency-svc-jrbjw [2.220458815s]
Aug 19 00:51:57.963: INFO: Created: latency-svc-zdkg7
Aug 19 00:51:58.021: INFO: Got endpoints: latency-svc-zdkg7 [2.333970687s]
Aug 19 00:51:58.035: INFO: Created: latency-svc-c88wh
Aug 19 00:51:58.052: INFO: Got endpoints: latency-svc-c88wh [2.30987198s]
Aug 19 00:51:58.083: INFO: Created: latency-svc-5khcj
Aug 19 00:51:58.099: INFO: Got endpoints: latency-svc-5khcj [2.28686937s]
Aug 19 00:51:58.183: INFO: Created: latency-svc-mbjfs
Aug 19 00:51:58.185: INFO: Got endpoints: latency-svc-mbjfs [2.20859658s]
Aug 19 00:51:58.219: INFO: Created: latency-svc-7wfnd
Aug 19 00:51:58.257: INFO: Got endpoints: latency-svc-7wfnd [2.22768599s]
Aug 19 00:51:58.332: INFO: Created: latency-svc-s4qzw
Aug 19 00:51:58.363: INFO: Created: latency-svc-ht2fl
Aug 19 00:51:58.363: INFO: Got endpoints: latency-svc-s4qzw [1.684685099s]
Aug 19 00:51:58.376: INFO: Got endpoints: latency-svc-ht2fl [1.414147867s]
Aug 19 00:51:58.399: INFO: Created: latency-svc-rwx92
Aug 19 00:51:58.413: INFO: Got endpoints: latency-svc-rwx92 [1.352844245s]
Aug 19 00:51:58.553: INFO: Created: latency-svc-h2g4h
Aug 19 00:51:58.556: INFO: Got endpoints: latency-svc-h2g4h [1.346349416s]
Aug 19 00:51:58.776: INFO: Created: latency-svc-rqg46
Aug 19 00:51:58.827: INFO: Got endpoints: latency-svc-rqg46 [1.556644437s]
Aug 19 00:51:58.871: INFO: Created: latency-svc-944dm
Aug 19 00:51:58.912: INFO: Got endpoints: latency-svc-944dm [1.553365264s]
Aug 19 00:51:58.939: INFO: Created: latency-svc-k5s8d
Aug 19 00:51:58.983: INFO: Got endpoints: latency-svc-k5s8d [1.549085224s]
Aug 19 00:51:59.056: INFO: Created: latency-svc-gqk5j
Aug 19 00:51:59.059: INFO: Got endpoints: latency-svc-gqk5j [1.521865645s]
Aug 19 00:51:59.113: INFO: Created: latency-svc-m9zxn
Aug 19 00:51:59.205: INFO: Got endpoints: latency-svc-m9zxn [1.590451777s]
Aug 19 00:51:59.253: INFO: Created: latency-svc-d44wz
Aug 19 00:51:59.278: INFO: Got endpoints: latency-svc-d44wz [1.400951056s]
Aug 19 00:51:59.359: INFO: Created: latency-svc-w22k9
Aug 19 00:51:59.443: INFO: Got endpoints: latency-svc-w22k9 [1.422566337s]
Aug 19 00:51:59.530: INFO: Created: latency-svc-gjxvp
Aug 19 00:51:59.541: INFO: Got endpoints: latency-svc-gjxvp [1.489465815s]
Aug 19 00:51:59.624: INFO: Created: latency-svc-jbrqn
Aug 19 00:51:59.678: INFO: Got endpoints: latency-svc-jbrqn [1.578824406s]
Aug 19 00:51:59.740: INFO: Created: latency-svc-jc762
Aug 19 00:51:59.752: INFO: Got endpoints: latency-svc-jc762 [1.566424803s]
Aug 19 00:51:59.823: INFO: Created: latency-svc-lbzwh
Aug 19 00:51:59.830: INFO: Got endpoints: latency-svc-lbzwh [1.572865509s]
Aug 19 00:51:59.871: INFO: Created: latency-svc-92tl8
Aug 19 00:51:59.884: INFO: Got endpoints: latency-svc-92tl8 [1.520913825s]
Aug 19 00:51:59.910: INFO: Created: latency-svc-wvrwj
Aug 19 00:51:59.978: INFO: Got endpoints: latency-svc-wvrwj [1.602010847s]
Aug 19 00:51:59.981: INFO: Created: latency-svc-z2cwc
Aug 19 00:51:59.999: INFO: Got endpoints: latency-svc-z2cwc [1.58597256s]
Aug 19 00:52:00.065: INFO: Created: latency-svc-cqzxw
Aug 19 00:52:00.140: INFO: Got endpoints: latency-svc-cqzxw [1.583480832s]
Aug 19 00:52:00.142: INFO: Latencies: [96.320904ms 139.038004ms 201.452051ms 263.609075ms 355.138455ms 401.525608ms 534.935453ms 551.640002ms 612.524561ms 755.09968ms 795.665473ms 817.598573ms 823.374599ms 825.548244ms 864.197938ms 871.609829ms 888.460181ms 894.59445ms 902.445135ms 905.918583ms 912.462365ms 915.644405ms 929.171029ms 931.564472ms 935.894036ms 942.710193ms 955.738252ms 959.151998ms 969.911531ms 974.982757ms 980.40381ms 980.982721ms 981.20029ms 981.460459ms 982.551986ms 983.945358ms 985.938637ms 990.36408ms 990.448615ms 992.370894ms 1.005686767s 1.012643652s 1.013086381s 1.018136037s 1.022858297s 1.024302197s 1.029341129s 1.03493759s 1.041446789s 1.050792827s 1.058602153s 1.059336384s 1.06328786s 1.07061823s 1.071158694s 1.071998014s 1.075534298s 1.08214038s 1.083355189s 1.084258896s 1.090185872s 1.090959215s 1.092509169s 1.096410547s 1.099192947s 1.099623537s 1.101235245s 1.102187237s 1.103472804s 1.113198951s 1.113427182s 1.115133164s 1.115904283s 1.11957522s 1.125252283s 1.125389165s 1.130468308s 1.141939463s 1.142495542s 1.143435354s 1.145777281s 1.151299863s 1.154150158s 1.154328656s 1.162077254s 1.162186282s 1.165469586s 1.165864793s 1.16663651s 1.172723785s 1.174349878s 1.176510019s 1.176793547s 1.178754076s 1.188633777s 1.190246766s 1.191928818s 1.192743325s 1.193080917s 1.193124146s 1.195491184s 1.201487684s 1.203989943s 1.217058937s 1.218096031s 1.220666336s 1.22800284s 1.22956614s 1.230026711s 1.23024903s 1.232339647s 1.242396059s 1.246406472s 1.248741834s 1.250669585s 1.251793624s 1.259315543s 1.261413796s 1.26533961s 1.267600146s 1.269011189s 1.271617966s 1.279040761s 1.279352348s 1.280034098s 1.282725703s 1.286084818s 1.287313245s 1.287644642s 1.292798656s 1.293962034s 1.296647929s 1.298242283s 1.298744147s 1.299367371s 1.305909604s 1.312272018s 1.317157915s 1.323193541s 1.323960459s 1.32745319s 1.328741068s 1.334199107s 1.334775124s 1.337604153s 1.341827721s 1.346349416s 1.347398207s 1.352844245s 1.359787737s 1.363664121s 1.365826952s 1.366098375s 1.368430879s 1.377438128s 1.390512819s 1.400951056s 1.409524965s 1.414147867s 1.41533279s 1.422566337s 1.456822843s 1.457311963s 1.462224731s 1.477520133s 1.478181463s 1.489465815s 1.502239774s 1.520913825s 1.521865645s 1.530345531s 1.532575975s 1.545813442s 1.549085224s 1.553365264s 1.556644437s 1.557818998s 1.566424803s 1.572865509s 1.578824406s 1.583480832s 1.58597256s 1.590451777s 1.602010847s 1.684685099s 1.702921178s 1.93711969s 1.937853717s 1.993767536s 2.064396201s 2.064607838s 2.0759486s 2.096649088s 2.107568362s 2.20859658s 2.220458815s 2.22768599s 2.28686937s 2.30987198s 2.333970687s]
Aug 19 00:52:00.144: INFO: 50 %ile: 1.195491184s
Aug 19 00:52:00.144: INFO: 90 %ile: 1.583480832s
Aug 19 00:52:00.144: INFO: 99 %ile: 2.30987198s
Aug 19 00:52:00.144: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:52:00.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-5237" for this suite.
Aug 19 00:52:48.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:52:49.286: INFO: namespace svc-latency-5237 deletion completed in 49.134770559s

• [SLOW TEST:71.515 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
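Note: the `50 %ile` / `90 %ile` / `99 %ile` lines above are computed from the sorted sample list in the `Latencies:` entry. A minimal nearest-rank percentile sketch (the e2e framework's exact boundary indexing may differ; this is illustrative, not its actual code):

```python
import math

def percentile(sorted_samples, p):
    """Nearest-rank percentile: the value at 1-indexed rank ceil(p/100 * n).

    Illustrative sketch assuming the nearest-rank method; the e2e suite's
    rounding at the boundaries may differ slightly.
    """
    n = len(sorted_samples)
    rank = max(1, math.ceil(p / 100.0 * n))  # clamp so p=0 still yields a value
    return sorted_samples[rank - 1]
```

With 200 samples, as in the `Total sample count: 200` summary above, the 50th percentile under this method is the 100th sorted value.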
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:52:49.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 00:52:49.827: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b80253a-be05-4c91-9b18-a8560076b886" in namespace "downward-api-9550" to be "success or failure"
Aug 19 00:52:49.891: INFO: Pod "downwardapi-volume-6b80253a-be05-4c91-9b18-a8560076b886": Phase="Pending", Reason="", readiness=false. Elapsed: 63.288316ms
Aug 19 00:52:51.974: INFO: Pod "downwardapi-volume-6b80253a-be05-4c91-9b18-a8560076b886": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146904037s
Aug 19 00:52:54.139: INFO: Pod "downwardapi-volume-6b80253a-be05-4c91-9b18-a8560076b886": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311763096s
Aug 19 00:52:56.175: INFO: Pod "downwardapi-volume-6b80253a-be05-4c91-9b18-a8560076b886": Phase="Running", Reason="", readiness=true. Elapsed: 6.347835454s
Aug 19 00:52:58.182: INFO: Pod "downwardapi-volume-6b80253a-be05-4c91-9b18-a8560076b886": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.354710356s
STEP: Saw pod success
Aug 19 00:52:58.182: INFO: Pod "downwardapi-volume-6b80253a-be05-4c91-9b18-a8560076b886" satisfied condition "success or failure"
Aug 19 00:52:58.244: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6b80253a-be05-4c91-9b18-a8560076b886 container client-container: 
STEP: delete the pod
Aug 19 00:52:58.414: INFO: Waiting for pod downwardapi-volume-6b80253a-be05-4c91-9b18-a8560076b886 to disappear
Aug 19 00:52:58.485: INFO: Pod downwardapi-volume-6b80253a-be05-4c91-9b18-a8560076b886 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:52:58.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9550" for this suite.
Aug 19 00:53:05.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:53:05.215: INFO: namespace downward-api-9550 deletion completed in 6.453953872s

• [SLOW TEST:15.925 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
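Note: the repeated `Waiting up to 5m0s for pod ... to be "success or failure"` lines above reflect a poll-until-terminal-phase loop. A hedged sketch of that pattern (function name, parameters, and defaults are illustrative assumptions, not the framework's actual API):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or the timeout elapses.

    Illustrative only: get_phase stands in for a pod GET against the API
    server; clock/sleep are injectable so the loop can be tested offline.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in want:
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        sleep(interval)
```

The `Elapsed:` values printed after each poll in the log correspond to the `elapsed` measurement here.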
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:53:05.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 19 00:53:05.300: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5494,SelfLink:/api/v1/namespaces/watch-5494/configmaps/e2e-watch-test-label-changed,UID:45326855-fe03-4e06-8a4b-7de05324b75c,ResourceVersion:940357,Generation:0,CreationTimestamp:2020-08-19 00:53:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 19 00:53:05.301: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5494,SelfLink:/api/v1/namespaces/watch-5494/configmaps/e2e-watch-test-label-changed,UID:45326855-fe03-4e06-8a4b-7de05324b75c,ResourceVersion:940358,Generation:0,CreationTimestamp:2020-08-19 00:53:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 19 00:53:05.302: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5494,SelfLink:/api/v1/namespaces/watch-5494/configmaps/e2e-watch-test-label-changed,UID:45326855-fe03-4e06-8a4b-7de05324b75c,ResourceVersion:940359,Generation:0,CreationTimestamp:2020-08-19 00:53:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 19 00:53:15.374: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5494,SelfLink:/api/v1/namespaces/watch-5494/configmaps/e2e-watch-test-label-changed,UID:45326855-fe03-4e06-8a4b-7de05324b75c,ResourceVersion:940382,Generation:0,CreationTimestamp:2020-08-19 00:53:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 19 00:53:15.375: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5494,SelfLink:/api/v1/namespaces/watch-5494/configmaps/e2e-watch-test-label-changed,UID:45326855-fe03-4e06-8a4b-7de05324b75c,ResourceVersion:940383,Generation:0,CreationTimestamp:2020-08-19 00:53:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug 19 00:53:15.376: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5494,SelfLink:/api/v1/namespaces/watch-5494/configmaps/e2e-watch-test-label-changed,UID:45326855-fe03-4e06-8a4b-7de05324b75c,ResourceVersion:940384,Generation:0,CreationTimestamp:2020-08-19 00:53:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:53:15.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5494" for this suite.
Aug 19 00:53:21.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:53:21.546: INFO: namespace watch-5494 deletion completed in 6.162811548s

• [SLOW TEST:16.329 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
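Note: the ADDED/MODIFIED/DELETED events above follow from how a label-selector watch interprets object updates: an object that stops matching the selector is reported as DELETED (even though it still exists), and one that starts matching again as ADDED. A sketch of that mapping (names are illustrative, not client code):

```python
def selector_event(selector, old_labels, new_labels, deleted=False):
    """Translate one object update into the event a label-selector watch delivers.

    Illustrative sketch: was-matching then not-matching -> DELETED;
    newly matching -> ADDED; still matching -> MODIFIED; never matching -> None.
    """
    was = (old_labels is not None and
           all(old_labels.get(k) == v for k, v in selector.items()))
    now = (not deleted and
           all(new_labels.get(k) == v for k, v in selector.items()))
    if was and not now:
        return "DELETED"
    if now and not was:
        return "ADDED"
    if was and now:
        return "MODIFIED"
    return None
```

This matches the log: changing the `watch-this-configmap` label produced a DELETED notification, and restoring it produced a fresh ADDED with the intervening mutations.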
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:53:21.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-b5b316a7-448c-4e1a-ad43-771937f3ba56 in namespace container-probe-5823
Aug 19 00:53:25.746: INFO: Started pod liveness-b5b316a7-448c-4e1a-ad43-771937f3ba56 in namespace container-probe-5823
STEP: checking the pod's current state and verifying that restartCount is present
Aug 19 00:53:25.751: INFO: Initial restart count of pod liveness-b5b316a7-448c-4e1a-ad43-771937f3ba56 is 0
Aug 19 00:53:43.830: INFO: Restart count of pod container-probe-5823/liveness-b5b316a7-448c-4e1a-ad43-771937f3ba56 is now 1 (18.079233186s elapsed)
Aug 19 00:54:03.958: INFO: Restart count of pod container-probe-5823/liveness-b5b316a7-448c-4e1a-ad43-771937f3ba56 is now 2 (38.206711064s elapsed)
Aug 19 00:54:28.110: INFO: Restart count of pod container-probe-5823/liveness-b5b316a7-448c-4e1a-ad43-771937f3ba56 is now 3 (1m2.359052775s elapsed)
Aug 19 00:54:44.162: INFO: Restart count of pod container-probe-5823/liveness-b5b316a7-448c-4e1a-ad43-771937f3ba56 is now 4 (1m18.411168632s elapsed)
Aug 19 00:55:56.655: INFO: Restart count of pod container-probe-5823/liveness-b5b316a7-448c-4e1a-ad43-771937f3ba56 is now 5 (2m30.904154369s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:55:56.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5823" for this suite.
Aug 19 00:56:03.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:56:03.326: INFO: namespace container-probe-5823 deletion completed in 6.389570158s

• [SLOW TEST:161.777 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
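Note: the probe test above asserts that the container's `restartCount` only ever grows across observations. The invariant can be sketched as (hypothetical helper, not the framework's code):

```python
def check_monotonic_restarts(observations):
    """Verify restart counts never decrease across observations; return the last.

    Illustrative sketch: `observations` stands in for successive reads of
    a container status's restartCount, as in the log lines above.
    """
    last = None
    for count in observations:
        if last is not None and count < last:
            raise ValueError(f"restart count decreased: {last} -> {count}")
        last = count
    return last
```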
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:56:03.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-13a05116-199c-4afc-b6f0-0053fefba818
STEP: Creating a pod to test consume configMaps
Aug 19 00:56:03.683: INFO: Waiting up to 5m0s for pod "pod-configmaps-aff19511-4bbd-4cfe-9710-35abd1f035f3" in namespace "configmap-6981" to be "success or failure"
Aug 19 00:56:03.786: INFO: Pod "pod-configmaps-aff19511-4bbd-4cfe-9710-35abd1f035f3": Phase="Pending", Reason="", readiness=false. Elapsed: 102.597649ms
Aug 19 00:56:05.864: INFO: Pod "pod-configmaps-aff19511-4bbd-4cfe-9710-35abd1f035f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180149549s
Aug 19 00:56:07.871: INFO: Pod "pod-configmaps-aff19511-4bbd-4cfe-9710-35abd1f035f3": Phase="Running", Reason="", readiness=true. Elapsed: 4.187120116s
Aug 19 00:56:09.878: INFO: Pod "pod-configmaps-aff19511-4bbd-4cfe-9710-35abd1f035f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.194483706s
STEP: Saw pod success
Aug 19 00:56:09.878: INFO: Pod "pod-configmaps-aff19511-4bbd-4cfe-9710-35abd1f035f3" satisfied condition "success or failure"
Aug 19 00:56:09.883: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-aff19511-4bbd-4cfe-9710-35abd1f035f3 container configmap-volume-test: 
STEP: delete the pod
Aug 19 00:56:09.910: INFO: Waiting for pod pod-configmaps-aff19511-4bbd-4cfe-9710-35abd1f035f3 to disappear
Aug 19 00:56:09.914: INFO: Pod pod-configmaps-aff19511-4bbd-4cfe-9710-35abd1f035f3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:56:09.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6981" for this suite.
Aug 19 00:56:17.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:56:18.094: INFO: namespace configmap-6981 deletion completed in 8.171734326s

• [SLOW TEST:14.767 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:56:18.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-4352137c-4d2a-40e7-8494-39588c26fa6d
STEP: Creating a pod to test consume configMaps
Aug 19 00:56:18.825: INFO: Waiting up to 5m0s for pod "pod-configmaps-863f26e4-3d1e-4751-90ac-4a4baa2f2a6f" in namespace "configmap-5802" to be "success or failure"
Aug 19 00:56:19.218: INFO: Pod "pod-configmaps-863f26e4-3d1e-4751-90ac-4a4baa2f2a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 392.210358ms
Aug 19 00:56:21.225: INFO: Pod "pod-configmaps-863f26e4-3d1e-4751-90ac-4a4baa2f2a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.399127777s
Aug 19 00:56:23.231: INFO: Pod "pod-configmaps-863f26e4-3d1e-4751-90ac-4a4baa2f2a6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.405731911s
STEP: Saw pod success
Aug 19 00:56:23.231: INFO: Pod "pod-configmaps-863f26e4-3d1e-4751-90ac-4a4baa2f2a6f" satisfied condition "success or failure"
Aug 19 00:56:23.235: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-863f26e4-3d1e-4751-90ac-4a4baa2f2a6f container configmap-volume-test: 
STEP: delete the pod
Aug 19 00:56:23.276: INFO: Waiting for pod pod-configmaps-863f26e4-3d1e-4751-90ac-4a4baa2f2a6f to disappear
Aug 19 00:56:23.280: INFO: Pod pod-configmaps-863f26e4-3d1e-4751-90ac-4a4baa2f2a6f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:56:23.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5802" for this suite.
Aug 19 00:56:29.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:56:29.462: INFO: namespace configmap-5802 deletion completed in 6.174733442s

• [SLOW TEST:11.367 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
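The `Waiting up to 5m0s … "success or failure"` / `Elapsed: …` lines in the test above come from the framework polling the pod's phase until it reaches a terminal state. A minimal Python sketch of that polling pattern (function and parameter names are assumptions for illustration, not the framework's actual API):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0, now=time.monotonic):
    """Poll get_phase() until the pod reaches a terminal phase.

    Mirrors the log pattern above: check immediately, then roughly
    every `interval` seconds, giving up after `timeout` seconds
    (5m0s in the log). Returns (phase, elapsed_seconds).
    """
    start = now()
    while True:
        phase = get_phase()
        elapsed = now() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        time.sleep(interval)
```

In the log the pod reports `Pending` at ~0.4s and ~2.4s before `Succeeded` at ~4.4s, consistent with a roughly 2-second poll interval.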
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:56:29.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 00:56:33.629: INFO: Waiting up to 5m0s for pod "client-envvars-e739a6f9-9db8-4379-9e0c-2e53361f7623" in namespace "pods-5051" to be "success or failure"
Aug 19 00:56:33.690: INFO: Pod "client-envvars-e739a6f9-9db8-4379-9e0c-2e53361f7623": Phase="Pending", Reason="", readiness=false. Elapsed: 60.966844ms
Aug 19 00:56:35.698: INFO: Pod "client-envvars-e739a6f9-9db8-4379-9e0c-2e53361f7623": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069470669s
Aug 19 00:56:37.706: INFO: Pod "client-envvars-e739a6f9-9db8-4379-9e0c-2e53361f7623": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077093894s
STEP: Saw pod success
Aug 19 00:56:37.706: INFO: Pod "client-envvars-e739a6f9-9db8-4379-9e0c-2e53361f7623" satisfied condition "success or failure"
Aug 19 00:56:37.711: INFO: Trying to get logs from node iruya-worker pod client-envvars-e739a6f9-9db8-4379-9e0c-2e53361f7623 container env3cont: 
STEP: delete the pod
Aug 19 00:56:37.760: INFO: Waiting for pod client-envvars-e739a6f9-9db8-4379-9e0c-2e53361f7623 to disappear
Aug 19 00:56:37.772: INFO: Pod client-envvars-e739a6f9-9db8-4379-9e0c-2e53361f7623 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:56:37.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5051" for this suite.
Aug 19 00:57:23.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:57:23.954: INFO: namespace pods-5051 deletion completed in 46.171107927s

• [SLOW TEST:54.487 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
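The `should contain environment variables for services` test above verifies that the kubelet injects, for each service that exists when a pod starts, variables of the form `{NAME}_SERVICE_HOST` and `{NAME}_SERVICE_PORT` (service name upper-cased, dashes replaced with underscores). A sketch of just that naming rule (the helper itself is illustrative; real kubelets also add docker-link-style `{NAME}_PORT_…` variables):

```python
def service_env_vars(name, cluster_ip, port):
    """Build the two core env vars a pod like client-envvars-... above
    is expected to see for an existing service. Naming rule: service
    name upper-cased, '-' -> '_'."""
    prefix = name.upper().replace("-", "_")
    return {
        f"{prefix}_SERVICE_HOST": cluster_ip,
        f"{prefix}_SERVICE_PORT": str(port),
    }
```

Because these variables are only set for services created before the pod, the test creates its service first and then starts the client pod, as the ~4s pod wait in the log reflects.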
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:57:23.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-ec727da1-c43b-4aac-93b7-4f6edf2182b2
STEP: Creating a pod to test consume secrets
Aug 19 00:57:25.033: INFO: Waiting up to 5m0s for pod "pod-secrets-36c3fa08-a6e6-4a52-9de1-24d7e52e318e" in namespace "secrets-2512" to be "success or failure"
Aug 19 00:57:25.171: INFO: Pod "pod-secrets-36c3fa08-a6e6-4a52-9de1-24d7e52e318e": Phase="Pending", Reason="", readiness=false. Elapsed: 137.194048ms
Aug 19 00:57:27.178: INFO: Pod "pod-secrets-36c3fa08-a6e6-4a52-9de1-24d7e52e318e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144895561s
Aug 19 00:57:29.186: INFO: Pod "pod-secrets-36c3fa08-a6e6-4a52-9de1-24d7e52e318e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152390733s
Aug 19 00:57:31.192: INFO: Pod "pod-secrets-36c3fa08-a6e6-4a52-9de1-24d7e52e318e": Phase="Running", Reason="", readiness=true. Elapsed: 6.159046872s
Aug 19 00:57:33.198: INFO: Pod "pod-secrets-36c3fa08-a6e6-4a52-9de1-24d7e52e318e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.165081996s
STEP: Saw pod success
Aug 19 00:57:33.199: INFO: Pod "pod-secrets-36c3fa08-a6e6-4a52-9de1-24d7e52e318e" satisfied condition "success or failure"
Aug 19 00:57:33.202: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-36c3fa08-a6e6-4a52-9de1-24d7e52e318e container secret-volume-test: 
STEP: delete the pod
Aug 19 00:57:33.286: INFO: Waiting for pod pod-secrets-36c3fa08-a6e6-4a52-9de1-24d7e52e318e to disappear
Aug 19 00:57:33.311: INFO: Pod pod-secrets-36c3fa08-a6e6-4a52-9de1-24d7e52e318e no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:57:33.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2512" for this suite.
Aug 19 00:57:39.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:57:39.670: INFO: namespace secrets-2512 deletion completed in 6.349728125s

• [SLOW TEST:15.714 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:57:39.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-7bcc5749-42c1-4286-a1d1-9d8383fe0044
STEP: Creating a pod to test consume configMaps
Aug 19 00:57:40.225: INFO: Waiting up to 5m0s for pod "pod-configmaps-fc7eca2b-d279-4673-87d4-a98fadaceb82" in namespace "configmap-5016" to be "success or failure"
Aug 19 00:57:40.298: INFO: Pod "pod-configmaps-fc7eca2b-d279-4673-87d4-a98fadaceb82": Phase="Pending", Reason="", readiness=false. Elapsed: 72.338909ms
Aug 19 00:57:42.305: INFO: Pod "pod-configmaps-fc7eca2b-d279-4673-87d4-a98fadaceb82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079363598s
Aug 19 00:57:44.347: INFO: Pod "pod-configmaps-fc7eca2b-d279-4673-87d4-a98fadaceb82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122213214s
Aug 19 00:57:46.368: INFO: Pod "pod-configmaps-fc7eca2b-d279-4673-87d4-a98fadaceb82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.142376321s
STEP: Saw pod success
Aug 19 00:57:46.368: INFO: Pod "pod-configmaps-fc7eca2b-d279-4673-87d4-a98fadaceb82" satisfied condition "success or failure"
Aug 19 00:57:46.378: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-fc7eca2b-d279-4673-87d4-a98fadaceb82 container configmap-volume-test: 
STEP: delete the pod
Aug 19 00:57:46.414: INFO: Waiting for pod pod-configmaps-fc7eca2b-d279-4673-87d4-a98fadaceb82 to disappear
Aug 19 00:57:46.418: INFO: Pod pod-configmaps-fc7eca2b-d279-4673-87d4-a98fadaceb82 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:57:46.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5016" for this suite.
Aug 19 00:57:52.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:57:52.642: INFO: namespace configmap-5016 deletion completed in 6.213790874s

• [SLOW TEST:12.971 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
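The "with mappings" variants above mount a ConfigMap or Secret volume whose `items` field maps selected keys to relative file paths, with file permissions taken from `defaultMode`. A rough Python sketch of that projection, assuming hypothetical helper names (this is not Kubernetes code, just the key-to-path semantics):

```python
import os
import tempfile

def project_items(data, items, default_mode=0o644, root=None):
    """Sketch of a ConfigMap/Secret volume with `items` mappings:
    each selected key is written to its mapped relative path under
    the mount root, with `defaultMode` permissions applied."""
    root = root or tempfile.mkdtemp()
    for key, rel_path in items.items():
        dest = os.path.join(root, rel_path)
        os.makedirs(os.path.dirname(dest) or root, exist_ok=True)
        with open(dest, "w") as f:
            f.write(data[key])
        os.chmod(dest, default_mode)
    return root
```

The earlier `defaultMode set [LinuxOnly]` ConfigMap test exercises the same mode-setting behavior without a key remapping.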
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:57:52.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-3f0feb32-0f58-4e3c-b3dc-291c94768302
STEP: Creating a pod to test consume secrets
Aug 19 00:57:52.836: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ea6a27a6-6d7d-4b59-b6ac-089af0c1aa23" in namespace "projected-5396" to be "success or failure"
Aug 19 00:57:52.862: INFO: Pod "pod-projected-secrets-ea6a27a6-6d7d-4b59-b6ac-089af0c1aa23": Phase="Pending", Reason="", readiness=false. Elapsed: 26.174027ms
Aug 19 00:57:54.869: INFO: Pod "pod-projected-secrets-ea6a27a6-6d7d-4b59-b6ac-089af0c1aa23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033199742s
Aug 19 00:57:56.876: INFO: Pod "pod-projected-secrets-ea6a27a6-6d7d-4b59-b6ac-089af0c1aa23": Phase="Running", Reason="", readiness=true. Elapsed: 4.039988186s
Aug 19 00:57:58.883: INFO: Pod "pod-projected-secrets-ea6a27a6-6d7d-4b59-b6ac-089af0c1aa23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046819583s
STEP: Saw pod success
Aug 19 00:57:58.883: INFO: Pod "pod-projected-secrets-ea6a27a6-6d7d-4b59-b6ac-089af0c1aa23" satisfied condition "success or failure"
Aug 19 00:57:58.889: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-ea6a27a6-6d7d-4b59-b6ac-089af0c1aa23 container projected-secret-volume-test: 
STEP: delete the pod
Aug 19 00:57:59.006: INFO: Waiting for pod pod-projected-secrets-ea6a27a6-6d7d-4b59-b6ac-089af0c1aa23 to disappear
Aug 19 00:57:59.013: INFO: Pod pod-projected-secrets-ea6a27a6-6d7d-4b59-b6ac-089af0c1aa23 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:57:59.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5396" for this suite.
Aug 19 00:58:05.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:58:05.176: INFO: namespace projected-5396 deletion completed in 6.155204742s

• [SLOW TEST:12.532 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:58:05.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 00:58:05.243: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:58:06.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3627" for this suite.
Aug 19 00:58:12.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:58:12.590: INFO: namespace custom-resource-definition-3627 deletion completed in 6.194348449s

• [SLOW TEST:7.413 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
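The CRD test above creates and deletes a minimal CustomResourceDefinition through the apiextensions API (v1beta1 in this 1.15 cluster). A sketch of the manifest shape involved, with illustrative group and kind names:

```python
def simple_crd(group, plural, kind, version="v1"):
    """Minimal apiextensions.k8s.io/v1beta1 CRD manifest of the kind
    this conformance test creates/deletes. The metadata name must be
    '<plural>.<group>'. Field values here are illustrative."""
    return {
        "apiVersion": "apiextensions.k8s.io/v1beta1",
        "kind": "CustomResourceDefinition",
        "metadata": {"name": f"{plural}.{group}"},
        "spec": {
            "group": group,
            "version": version,
            "scope": "Namespaced",
            "names": {"plural": plural, "kind": kind},
        },
    }
```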
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:58:12.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-f25ef7b0-4aab-4617-b109-f2bfd2ae5eac
STEP: Creating a pod to test consume secrets
Aug 19 00:58:12.749: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-92cb5881-60cb-4633-bf1f-61d745020257" in namespace "projected-7847" to be "success or failure"
Aug 19 00:58:12.755: INFO: Pod "pod-projected-secrets-92cb5881-60cb-4633-bf1f-61d745020257": Phase="Pending", Reason="", readiness=false. Elapsed: 5.604742ms
Aug 19 00:58:14.768: INFO: Pod "pod-projected-secrets-92cb5881-60cb-4633-bf1f-61d745020257": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018423905s
Aug 19 00:58:16.775: INFO: Pod "pod-projected-secrets-92cb5881-60cb-4633-bf1f-61d745020257": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025450662s
STEP: Saw pod success
Aug 19 00:58:16.775: INFO: Pod "pod-projected-secrets-92cb5881-60cb-4633-bf1f-61d745020257" satisfied condition "success or failure"
Aug 19 00:58:16.781: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-92cb5881-60cb-4633-bf1f-61d745020257 container secret-volume-test: 
STEP: delete the pod
Aug 19 00:58:16.809: INFO: Waiting for pod pod-projected-secrets-92cb5881-60cb-4633-bf1f-61d745020257 to disappear
Aug 19 00:58:16.814: INFO: Pod pod-projected-secrets-92cb5881-60cb-4633-bf1f-61d745020257 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:58:16.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7847" for this suite.
Aug 19 00:58:22.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:58:23.339: INFO: namespace projected-7847 deletion completed in 6.492159693s

• [SLOW TEST:10.747 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:58:23.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3616
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 19 00:58:24.004: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 19 00:58:52.395: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.62 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3616 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 00:58:52.395: INFO: >>> kubeConfig: /root/.kube/config
I0819 00:58:52.471772       7 log.go:172] (0x400193ea50) (0x40030059a0) Create stream
I0819 00:58:52.471954       7 log.go:172] (0x400193ea50) (0x40030059a0) Stream added, broadcasting: 1
I0819 00:58:52.477565       7 log.go:172] (0x400193ea50) Reply frame received for 1
I0819 00:58:52.477723       7 log.go:172] (0x400193ea50) (0x40026c7220) Create stream
I0819 00:58:52.477790       7 log.go:172] (0x400193ea50) (0x40026c7220) Stream added, broadcasting: 3
I0819 00:58:52.479692       7 log.go:172] (0x400193ea50) Reply frame received for 3
I0819 00:58:52.479925       7 log.go:172] (0x400193ea50) (0x4003005a40) Create stream
I0819 00:58:52.480074       7 log.go:172] (0x400193ea50) (0x4003005a40) Stream added, broadcasting: 5
I0819 00:58:52.481566       7 log.go:172] (0x400193ea50) Reply frame received for 5
I0819 00:58:53.534085       7 log.go:172] (0x400193ea50) Data frame received for 3
I0819 00:58:53.534415       7 log.go:172] (0x40026c7220) (3) Data frame handling
I0819 00:58:53.534635       7 log.go:172] (0x40026c7220) (3) Data frame sent
I0819 00:58:53.534890       7 log.go:172] (0x400193ea50) Data frame received for 3
I0819 00:58:53.535015       7 log.go:172] (0x40026c7220) (3) Data frame handling
I0819 00:58:53.535210       7 log.go:172] (0x400193ea50) Data frame received for 5
I0819 00:58:53.535414       7 log.go:172] (0x4003005a40) (5) Data frame handling
I0819 00:58:53.536496       7 log.go:172] (0x400193ea50) Data frame received for 1
I0819 00:58:53.536705       7 log.go:172] (0x40030059a0) (1) Data frame handling
I0819 00:58:53.537025       7 log.go:172] (0x40030059a0) (1) Data frame sent
I0819 00:58:53.537213       7 log.go:172] (0x400193ea50) (0x40030059a0) Stream removed, broadcasting: 1
I0819 00:58:53.537408       7 log.go:172] (0x400193ea50) Go away received
I0819 00:58:53.537815       7 log.go:172] (0x400193ea50) (0x40030059a0) Stream removed, broadcasting: 1
I0819 00:58:53.538020       7 log.go:172] (0x400193ea50) (0x40026c7220) Stream removed, broadcasting: 3
I0819 00:58:53.538202       7 log.go:172] (0x400193ea50) (0x4003005a40) Stream removed, broadcasting: 5
Aug 19 00:58:53.538: INFO: Found all expected endpoints: [netserver-0]
Aug 19 00:58:53.543: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.168 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3616 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 00:58:53.543: INFO: >>> kubeConfig: /root/.kube/config
I0819 00:58:53.597366       7 log.go:172] (0x4001da6d10) (0x4001dfba40) Create stream
I0819 00:58:53.597533       7 log.go:172] (0x4001da6d10) (0x4001dfba40) Stream added, broadcasting: 1
I0819 00:58:53.602948       7 log.go:172] (0x4001da6d10) Reply frame received for 1
I0819 00:58:53.603206       7 log.go:172] (0x4001da6d10) (0x40026c72c0) Create stream
I0819 00:58:53.603363       7 log.go:172] (0x4001da6d10) (0x40026c72c0) Stream added, broadcasting: 3
I0819 00:58:53.605556       7 log.go:172] (0x4001da6d10) Reply frame received for 3
I0819 00:58:53.605758       7 log.go:172] (0x4001da6d10) (0x4002fed9a0) Create stream
I0819 00:58:53.605865       7 log.go:172] (0x4001da6d10) (0x4002fed9a0) Stream added, broadcasting: 5
I0819 00:58:53.607626       7 log.go:172] (0x4001da6d10) Reply frame received for 5
I0819 00:58:54.698640       7 log.go:172] (0x4001da6d10) Data frame received for 3
I0819 00:58:54.698785       7 log.go:172] (0x40026c72c0) (3) Data frame handling
I0819 00:58:54.698884       7 log.go:172] (0x40026c72c0) (3) Data frame sent
I0819 00:58:54.698982       7 log.go:172] (0x4001da6d10) Data frame received for 3
I0819 00:58:54.699081       7 log.go:172] (0x4001da6d10) Data frame received for 5
I0819 00:58:54.699268       7 log.go:172] (0x4002fed9a0) (5) Data frame handling
I0819 00:58:54.699424       7 log.go:172] (0x40026c72c0) (3) Data frame handling
I0819 00:58:54.700568       7 log.go:172] (0x4001da6d10) Data frame received for 1
I0819 00:58:54.700696       7 log.go:172] (0x4001dfba40) (1) Data frame handling
I0819 00:58:54.701026       7 log.go:172] (0x4001dfba40) (1) Data frame sent
I0819 00:58:54.701181       7 log.go:172] (0x4001da6d10) (0x4001dfba40) Stream removed, broadcasting: 1
I0819 00:58:54.701364       7 log.go:172] (0x4001da6d10) Go away received
I0819 00:58:54.701898       7 log.go:172] (0x4001da6d10) (0x4001dfba40) Stream removed, broadcasting: 1
I0819 00:58:54.702071       7 log.go:172] (0x4001da6d10) (0x40026c72c0) Stream removed, broadcasting: 3
I0819 00:58:54.702183       7 log.go:172] (0x4001da6d10) (0x4002fed9a0) Stream removed, broadcasting: 5
Aug 19 00:58:54.702: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:58:54.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3616" for this suite.
Aug 19 00:59:20.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:59:21.048: INFO: namespace pod-network-test-3616 deletion completed in 26.336752594s

• [SLOW TEST:57.705 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
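The UDP connectivity check above runs `echo hostName | nc -w 1 -u <pod-ip> 8081` inside a host-network test pod: it sends the literal string `hostName` as one UDP datagram, and each netserver pod replies with its hostname (hence `Found all expected endpoints: [netserver-0]`). A self-contained Python sketch of the probe side (function name is an assumption):

```python
import socket

def udp_hostname_probe(host, port, timeout=1.0):
    """Equivalent of `echo hostName | nc -w 1 -u <ip> <port>`:
    send 'hostName' as a single UDP datagram, return the reply
    (the netserver answers with its own hostname)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(b"hostName", (host, port))
        reply, _ = s.recvfrom(4096)
        return reply.decode().strip()
```

The surrounding `log.go:172 … Create stream / broadcasting: 1/3/5` lines are the SPDY exec transport multiplexing the command's error, stdout, and stderr channels; they indicate normal exec plumbing, not a test failure.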
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:59:21.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 19 00:59:21.383: INFO: Waiting up to 5m0s for pod "pod-f9713ac2-85a2-4339-b370-ad90a6411668" in namespace "emptydir-1652" to be "success or failure"
Aug 19 00:59:21.397: INFO: Pod "pod-f9713ac2-85a2-4339-b370-ad90a6411668": Phase="Pending", Reason="", readiness=false. Elapsed: 14.341488ms
Aug 19 00:59:23.404: INFO: Pod "pod-f9713ac2-85a2-4339-b370-ad90a6411668": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021222718s
Aug 19 00:59:25.411: INFO: Pod "pod-f9713ac2-85a2-4339-b370-ad90a6411668": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027658564s
Aug 19 00:59:27.418: INFO: Pod "pod-f9713ac2-85a2-4339-b370-ad90a6411668": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034528583s
STEP: Saw pod success
Aug 19 00:59:27.418: INFO: Pod "pod-f9713ac2-85a2-4339-b370-ad90a6411668" satisfied condition "success or failure"
Aug 19 00:59:27.423: INFO: Trying to get logs from node iruya-worker2 pod pod-f9713ac2-85a2-4339-b370-ad90a6411668 container test-container: 
STEP: delete the pod
Aug 19 00:59:27.460: INFO: Waiting for pod pod-f9713ac2-85a2-4339-b370-ad90a6411668 to disappear
Aug 19 00:59:27.500: INFO: Pod pod-f9713ac2-85a2-4339-b370-ad90a6411668 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:59:27.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1652" for this suite.
Aug 19 00:59:33.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:59:33.803: INFO: namespace emptydir-1652 deletion completed in 6.293421905s

• [SLOW TEST:12.754 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:59:33.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-8e14faa0-fcb1-424c-a3f2-192c3a3e922f
STEP: Creating a pod to test consume configMaps
Aug 19 00:59:33.963: INFO: Waiting up to 5m0s for pod "pod-configmaps-4bd9b1c6-8484-4b0c-8460-ede471669969" in namespace "configmap-3151" to be "success or failure"
Aug 19 00:59:34.011: INFO: Pod "pod-configmaps-4bd9b1c6-8484-4b0c-8460-ede471669969": Phase="Pending", Reason="", readiness=false. Elapsed: 47.196636ms
Aug 19 00:59:36.018: INFO: Pod "pod-configmaps-4bd9b1c6-8484-4b0c-8460-ede471669969": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054626753s
Aug 19 00:59:38.027: INFO: Pod "pod-configmaps-4bd9b1c6-8484-4b0c-8460-ede471669969": Phase="Running", Reason="", readiness=true. Elapsed: 4.06332454s
Aug 19 00:59:40.053: INFO: Pod "pod-configmaps-4bd9b1c6-8484-4b0c-8460-ede471669969": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.089083341s
STEP: Saw pod success
Aug 19 00:59:40.053: INFO: Pod "pod-configmaps-4bd9b1c6-8484-4b0c-8460-ede471669969" satisfied condition "success or failure"
Aug 19 00:59:40.057: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-4bd9b1c6-8484-4b0c-8460-ede471669969 container configmap-volume-test: 
STEP: delete the pod
Aug 19 00:59:40.109: INFO: Waiting for pod pod-configmaps-4bd9b1c6-8484-4b0c-8460-ede471669969 to disappear
Aug 19 00:59:40.149: INFO: Pod pod-configmaps-4bd9b1c6-8484-4b0c-8460-ede471669969 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:59:40.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3151" for this suite.
Aug 19 00:59:46.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:59:46.323: INFO: namespace configmap-3151 deletion completed in 6.139554068s

• [SLOW TEST:12.518 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
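The ConfigMap test above builds its objects through the Go client, but the equivalent resources can be sketched as manifests. This is a hypothetical reconstruction, not the test's actual generated YAML: the names are stand-ins for the generated `configmap-test-volume-…` / `pod-configmaps-…` names in the log, and the `mounttest` image and `data-1` key are assumptions based on how this conformance test family typically works (a pod mounts the ConfigMap as a volume, prints the file, and exits so the pod reaches `Succeeded`).

```yaml
# Hypothetical equivalent of the objects the test creates.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume        # log shows a generated name with a UUID suffix
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never               # pod must terminate for "success or failure"
  containers:
  - name: configmap-volume-test      # container name matches the log line above
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--file_content=/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```

The test then waits for phase `Succeeded` (visible in the `Phase="Pending"` → `"Running"` → `"Succeeded"` polling above) and inspects the container log for the expected file content.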
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:59:46.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-85e8ece1-9104-4a81-86b8-b1aa9fa23e43
STEP: Creating a pod to test consume secrets
Aug 19 00:59:46.467: INFO: Waiting up to 5m0s for pod "pod-secrets-770a40a0-ddd5-4608-a63c-22fbfb1040a6" in namespace "secrets-9270" to be "success or failure"
Aug 19 00:59:46.491: INFO: Pod "pod-secrets-770a40a0-ddd5-4608-a63c-22fbfb1040a6": Phase="Pending", Reason="", readiness=false. Elapsed: 23.305436ms
Aug 19 00:59:48.520: INFO: Pod "pod-secrets-770a40a0-ddd5-4608-a63c-22fbfb1040a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052392384s
Aug 19 00:59:50.527: INFO: Pod "pod-secrets-770a40a0-ddd5-4608-a63c-22fbfb1040a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059844081s
STEP: Saw pod success
Aug 19 00:59:50.528: INFO: Pod "pod-secrets-770a40a0-ddd5-4608-a63c-22fbfb1040a6" satisfied condition "success or failure"
Aug 19 00:59:50.537: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-770a40a0-ddd5-4608-a63c-22fbfb1040a6 container secret-volume-test: 
STEP: delete the pod
Aug 19 00:59:50.561: INFO: Waiting for pod pod-secrets-770a40a0-ddd5-4608-a63c-22fbfb1040a6 to disappear
Aug 19 00:59:50.565: INFO: Pod pod-secrets-770a40a0-ddd5-4608-a63c-22fbfb1040a6 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 00:59:50.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9270" for this suite.
Aug 19 00:59:56.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 00:59:56.838: INFO: namespace secrets-9270 deletion completed in 6.264542462s

• [SLOW TEST:10.513 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
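The Secrets variant exercises `defaultMode` and `fsGroup` together. Again a hypothetical manifest sketch, not the test's generated YAML: the exact mode, UID, and GID values are assumptions chosen to illustrate the `[LinuxOnly]` behavior being verified (the secret file's mode and group ownership as seen by a non-root container).

```yaml
# Hypothetical equivalent: secret mounted read-only for a non-root user,
# with an explicit file mode and fsGroup-driven group ownership.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test                  # log shows a generated name with a UUID suffix
data:
  data-1: dmFsdWUtMQ==               # base64("value-1")
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root (assumed value)
    fsGroup: 1001                    # applied as group owner of the volume files
  containers:
  - name: secret-volume-test         # container name matches the log line above
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--file_mode=/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0400              # octal; YAML 1.1 integer parsing
```

With `fsGroup` set, the kubelet chowns the projected files to that GID and ORs in the group-read bit as needed, which is what the test asserts from inside the container.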
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 00:59:56.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 00:59:57.045: INFO: Creating deployment "nginx-deployment"
Aug 19 00:59:57.081: INFO: Waiting for observed generation 1
Aug 19 00:59:59.367: INFO: Waiting for all required pods to come up
Aug 19 01:00:00.340: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 19 01:00:16.574: INFO: Waiting for deployment "nginx-deployment" to complete
Aug 19 01:00:16.586: INFO: Updating deployment "nginx-deployment" with a non-existent image
Aug 19 01:00:16.598: INFO: Updating deployment nginx-deployment
Aug 19 01:00:16.598: INFO: Waiting for observed generation 2
Aug 19 01:00:19.102: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 19 01:00:19.259: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 19 01:00:19.318: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 19 01:00:19.748: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 19 01:00:19.749: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 19 01:00:19.753: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 19 01:00:19.761: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Aug 19 01:00:19.761: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Aug 19 01:00:19.771: INFO: Updating deployment nginx-deployment
Aug 19 01:00:19.771: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Aug 19 01:00:20.313: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 19 01:00:20.708: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
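The 20/13 split verified above is proportional scaling at work: scaling from 10 to 30 mid-rollout, with `maxSurge: 3` allowing 33 total pods, the controller distributes the extra replicas across the two active ReplicaSets (sized 8 and 5) in proportion to their current sizes. A simplified sketch of that arithmetic follows; the real controller works through `deployment.kubernetes.io/max-replicas` annotations and its own rounding rules, so this only models the headline numbers, not the exact implementation.

```python
def proportional_scale(rs_replicas, new_total, max_surge):
    """Distribute a scale-up across active ReplicaSets in proportion to
    their current sizes (simplified model of the Deployment controller).

    rs_replicas: current .spec.replicas of each active ReplicaSet
    new_total:   the Deployment's new .spec.replicas
    max_surge:   extra pods allowed above new_total during the rollout
    """
    allowed = new_total + max_surge        # 30 + 3 = 33 pods allowed in total
    current = sum(rs_replicas)             # 8 + 5 = 13 currently requested
    to_add = allowed - current             # 20 replicas to hand out
    result, handed_out = [], 0
    for i, r in enumerate(rs_replicas):
        if i == len(rs_replicas) - 1:
            share = to_add - handed_out    # last RS absorbs rounding leftovers
        else:
            share = r * to_add // current  # floor of the proportional share
        result.append(r + share)
        handed_out += share
    return result

# First rollout's RS at 8, second at 5; scale the Deployment from 10 to 30:
print(proportional_scale([8, 5], new_total=30, max_surge=3))  # → [20, 13]
```

The output matches the log: the old ReplicaSet is verified at `.spec.replicas = 20` and the new one at `.spec.replicas = 13`, summing to the surge-capped total of 33.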
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 19 01:00:23.457: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-1303,SelfLink:/apis/apps/v1/namespaces/deployment-1303/deployments/nginx-deployment,UID:762ba60f-f016-4648-adba-cbd62c4d1fbc,ResourceVersion:941859,Generation:3,CreationTimestamp:2020-08-19 00:59:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-08-19 01:00:20 +0000 UTC 2020-08-19 01:00:20 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-19 01:00:20 +0000 UTC 2020-08-19 00:59:57 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Aug 19 01:00:23.510: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-1303,SelfLink:/apis/apps/v1/namespaces/deployment-1303/replicasets/nginx-deployment-55fb7cb77f,UID:5c7ffb70-9926-48de-a7cf-411a9bf08d5a,ResourceVersion:941850,Generation:3,CreationTimestamp:2020-08-19 01:00:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 762ba60f-f016-4648-adba-cbd62c4d1fbc 0x4003178537 0x4003178538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 19 01:00:23.510: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Aug 19 01:00:23.511: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-1303,SelfLink:/apis/apps/v1/namespaces/deployment-1303/replicasets/nginx-deployment-7b8c6f4498,UID:ae224c58-e0e1-4d50-9dcf-f2624a2c6e42,ResourceVersion:941856,Generation:3,CreationTimestamp:2020-08-19 00:59:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 762ba60f-f016-4648-adba-cbd62c4d1fbc 0x4003178607 0x4003178608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Aug 19 01:00:23.649: INFO: Pod "nginx-deployment-55fb7cb77f-54hg2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-54hg2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-55fb7cb77f-54hg2,UID:463f4f5c-4f38-4304-bc2f-2e122bd1f68e,ResourceVersion:941840,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5c7ffb70-9926-48de-a7cf-411a9bf08d5a 0x4001ae40b7 0x4001ae40b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4001ae4190} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001ae41c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.651: INFO: Pod "nginx-deployment-55fb7cb77f-5fvhs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5fvhs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-55fb7cb77f-5fvhs,UID:064a9ab1-24e5-447b-9403-18f2928e1077,ResourceVersion:941858,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5c7ffb70-9926-48de-a7cf-411a9bf08d5a 0x4001ae4300 0x4001ae4301}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4001ae43f0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001ae4410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 01:00:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.652: INFO: Pod "nginx-deployment-55fb7cb77f-5kfqs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5kfqs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-55fb7cb77f-5kfqs,UID:e620eea6-a42f-44fd-85b9-df5f90801c0b,ResourceVersion:941841,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5c7ffb70-9926-48de-a7cf-411a9bf08d5a 0x4001ae4670 0x4001ae4671}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4001ae47c0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001ae47f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.652: INFO: Pod "nginx-deployment-55fb7cb77f-8b2kq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8b2kq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-55fb7cb77f-8b2kq,UID:9d27ba8d-c71c-4cd6-b6e6-3d7c75e66dfc,ResourceVersion:941836,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5c7ffb70-9926-48de-a7cf-411a9bf08d5a 0x4001ae48d0 0x4001ae48d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4001ae49c0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001ae49e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.654: INFO: Pod "nginx-deployment-55fb7cb77f-dffbw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dffbw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-55fb7cb77f-dffbw,UID:2ac21f5e-d9aa-4786-805a-5a3b0a461538,ResourceVersion:941832,Generation:0,CreationTimestamp:2020-08-19 01:00:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5c7ffb70-9926-48de-a7cf-411a9bf08d5a 0x4001ae4ac0 0x4001ae4ac1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4001ae4c00} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001ae4c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.68,StartTime:2020-08-19 01:00:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.655: INFO: Pod "nginx-deployment-55fb7cb77f-gfjvj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gfjvj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-55fb7cb77f-gfjvj,UID:daf7173b-f690-4391-a0b2-7e8a97147bcb,ResourceVersion:941862,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5c7ffb70-9926-48de-a7cf-411a9bf08d5a 0x4001ae4df0 0x4001ae4df1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4001ae4e70} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001ae4e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 01:00:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.656: INFO: Pod "nginx-deployment-55fb7cb77f-jwvjd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jwvjd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-55fb7cb77f-jwvjd,UID:1c2e9866-aec6-40d0-b9b2-e8fb86ca0434,ResourceVersion:941822,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5c7ffb70-9926-48de-a7cf-411a9bf08d5a 0x4001ae4f60 0x4001ae4f61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4001ae4fe0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001ae5000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.657: INFO: Pod "nginx-deployment-55fb7cb77f-k9vvw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-k9vvw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-55fb7cb77f-k9vvw,UID:237d375b-f6ef-4c96-a356-5725fb6ca0a7,ResourceVersion:941842,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5c7ffb70-9926-48de-a7cf-411a9bf08d5a 0x4001ae5080 0x4001ae5081}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4001ae5100} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001ae5120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.658: INFO: Pod "nginx-deployment-55fb7cb77f-kx4qm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kx4qm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-55fb7cb77f-kx4qm,UID:11ee3736-f28d-44e4-a522-1ee25c9676a5,ResourceVersion:941772,Generation:0,CreationTimestamp:2020-08-19 01:00:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5c7ffb70-9926-48de-a7cf-411a9bf08d5a 0x4001ae51a0 0x4001ae51a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4001ae5220} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001ae5240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 01:00:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.660: INFO: Pod "nginx-deployment-55fb7cb77f-r62jr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-r62jr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-55fb7cb77f-r62jr,UID:b2b34fea-9899-4302-942e-1e5f678fbca9,ResourceVersion:941756,Generation:0,CreationTimestamp:2020-08-19 01:00:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5c7ffb70-9926-48de-a7cf-411a9bf08d5a 0x4001ae5310 0x4001ae5311}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4001ae5390} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001ae53b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 01:00:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.661: INFO: Pod "nginx-deployment-55fb7cb77f-v2ll7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-v2ll7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-55fb7cb77f-v2ll7,UID:ff269b1d-6364-4ccb-9723-1dafb1a8a270,ResourceVersion:941847,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5c7ffb70-9926-48de-a7cf-411a9bf08d5a 0x4001ae5480 0x4001ae5481}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4001ae5500} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001ae5620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.661: INFO: Pod "nginx-deployment-55fb7cb77f-w45cf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-w45cf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-55fb7cb77f-w45cf,UID:2c3d5dd3-f48d-408a-82d3-18a8f89cd160,ResourceVersion:941774,Generation:0,CreationTimestamp:2020-08-19 01:00:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5c7ffb70-9926-48de-a7cf-411a9bf08d5a 0x4001ae5720 0x4001ae5721}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4001ae57f0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001ae5810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 01:00:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.662: INFO: Pod "nginx-deployment-55fb7cb77f-wpjml" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wpjml,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-55fb7cb77f-wpjml,UID:86e2c9d2-7c04-42d8-81e4-0c19ad96d49b,ResourceVersion:941751,Generation:0,CreationTimestamp:2020-08-19 01:00:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 5c7ffb70-9926-48de-a7cf-411a9bf08d5a 0x4001ae5a60 0x4001ae5a61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0x4001ae5ae0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001ae5b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:16 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 01:00:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.663: INFO: Pod "nginx-deployment-7b8c6f4498-2wzd8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2wzd8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-2wzd8,UID:0fcb39db-6b73-4544-9e0b-6a6fef1ec43e,ResourceVersion:941871,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4001ae5cf0 0x4001ae5cf1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4001ae5ec0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001ae5ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 01:00:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.664: INFO: Pod "nginx-deployment-7b8c6f4498-6gh7b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6gh7b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-6gh7b,UID:4b1aea0d-ac62-42ad-8e6f-35da40196e91,ResourceVersion:941893,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff0020 0x4000ff0021}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff0090} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff00b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 01:00:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.665: INFO: Pod "nginx-deployment-7b8c6f4498-6rqj4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6rqj4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-6rqj4,UID:cf953312-dc74-4711-b231-a1d52b084315,ResourceVersion:941712,Generation:0,CreationTimestamp:2020-08-19 00:59:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff0170 0x4000ff0171}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff01e0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff0200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:59:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:59:57 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.66,StartTime:2020-08-19 00:59:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-19 01:00:12 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://71e25e9e0cfbfeabd9fd029036a6dcf52fc58361ac8c49b562cff80da4664b5e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.667: INFO: Pod "nginx-deployment-7b8c6f4498-8gch6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8gch6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-8gch6,UID:5b433f55-131d-4acb-acf3-88d5d9290c12,ResourceVersion:941884,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff02d0 0x4000ff02d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff0340} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff0360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 01:00:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.668: INFO: Pod "nginx-deployment-7b8c6f4498-9bcjg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9bcjg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-9bcjg,UID:939602f7-e470-43f6-92d7-b289b3834f9c,ResourceVersion:941883,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff0420 0x4000ff0421}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff0490} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff04b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 01:00:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.668: INFO: Pod "nginx-deployment-7b8c6f4498-9dzbg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9dzbg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-9dzbg,UID:554df8a4-0820-4aca-9474-f43729333bb1,ResourceVersion:941835,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff0580 0x4000ff0581}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff05f0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff0610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.669: INFO: Pod "nginx-deployment-7b8c6f4498-b6c5q" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b6c5q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-b6c5q,UID:188e784f-26e6-497a-a332-60801e0ab195,ResourceVersion:941679,Generation:0,CreationTimestamp:2020-08-19 00:59:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff0690 0x4000ff0691}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff0700} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff0720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:59:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:59:57 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.64,StartTime:2020-08-19 00:59:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-19 01:00:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://55c9c85227c667ea969092baa9a1c70a61995022cddbb929fd60ed933cee5a42}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.671: INFO: Pod "nginx-deployment-7b8c6f4498-b6wf6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b6wf6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-b6wf6,UID:22d2c3ff-1920-4cfc-8597-20ae2486674b,ResourceVersion:941892,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff07f0 0x4000ff07f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff0860} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff0880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 01:00:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.672: INFO: Pod "nginx-deployment-7b8c6f4498-bhnkv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bhnkv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-bhnkv,UID:0c476986-222c-4c97-9c0b-aa500194d7a1,ResourceVersion:941696,Generation:0,CreationTimestamp:2020-08-19 00:59:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff0940 0x4000ff0941}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff09b0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff09d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:59:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:59:57 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.177,StartTime:2020-08-19 00:59:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-19 01:00:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://dea713aecdcc25f1ed72214a03ea0bfe8e99471fbd9d82025feffbc19991a199}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.672: INFO: Pod "nginx-deployment-7b8c6f4498-cxlmr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cxlmr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-cxlmr,UID:45cc2435-5cb9-4592-8e1e-568265452406,ResourceVersion:941838,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff0aa0 0x4000ff0aa1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff0b10} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff0b30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.673: INFO: Pod "nginx-deployment-7b8c6f4498-d8zx7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d8zx7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-d8zx7,UID:510237ff-44db-470e-87d5-1c41134c702a,ResourceVersion:941670,Generation:0,CreationTimestamp:2020-08-19 00:59:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff0bd0 0x4000ff0bd1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff0c40} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff0c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:59:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:59:57 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.63,StartTime:2020-08-19 00:59:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-19 01:00:06 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b4cc7a41963700d0201592e8ae10e60913a56226fc86cb501665d5285f2d1560}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.674: INFO: Pod "nginx-deployment-7b8c6f4498-dtd2g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dtd2g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-dtd2g,UID:bb548564-8734-40b7-b7a2-35bdaadda28e,ResourceVersion:941831,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff0d30 0x4000ff0d31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff0da0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff0df0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.675: INFO: Pod "nginx-deployment-7b8c6f4498-jzp2q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jzp2q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-jzp2q,UID:b0e0c345-0d0b-4cc6-b567-c9b20161a1bb,ResourceVersion:941848,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff0f10 0x4000ff0f11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff0ff0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff11c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-19 01:00:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.676: INFO: Pod "nginx-deployment-7b8c6f4498-n4qbx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n4qbx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-n4qbx,UID:1d98777b-b912-4594-b92e-170f83e43938,ResourceVersion:941853,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff1360 0x4000ff1361}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff14d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff14f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 01:00:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.677: INFO: Pod "nginx-deployment-7b8c6f4498-nzq7k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nzq7k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-nzq7k,UID:9b66d5ff-68bb-4c95-b5f9-546319771670,ResourceVersion:941839,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff15e0 0x4000ff15e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff1650} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff1760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.678: INFO: Pod "nginx-deployment-7b8c6f4498-qgccj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qgccj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-qgccj,UID:c15059f7-ed67-44b1-a530-cc7ce9e7667b,ResourceVersion:941674,Generation:0,CreationTimestamp:2020-08-19 00:59:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff17e0 0x4000ff17e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff1850} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff1870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:59:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:59:57 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.173,StartTime:2020-08-19 00:59:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-19 01:00:06 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c266e8711830870dfe65fe0194b1df6eded2ef95a78bf500cd6d2bd41f7377bc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.679: INFO: Pod "nginx-deployment-7b8c6f4498-sbgjm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sbgjm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-sbgjm,UID:9780a46a-ce10-4a99-a462-af5af21c6277,ResourceVersion:941685,Generation:0,CreationTimestamp:2020-08-19 00:59:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff1a30 0x4000ff1a31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff1b40} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff1b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:59:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:59:57 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.174,StartTime:2020-08-19 00:59:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-19 01:00:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://62fd9796accd6c5713081015b429b5d2f30fb555053820f10dd3ac00cfbecada}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.680: INFO: Pod "nginx-deployment-7b8c6f4498-sc6sm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sc6sm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-sc6sm,UID:a0e1b191-d3e2-4fac-bcbf-8d9f084cdf56,ResourceVersion:941684,Generation:0,CreationTimestamp:2020-08-19 00:59:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x4000ff1d70 0x4000ff1d71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4000ff1e90} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000ff1f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:59:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:59:57 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.67,StartTime:2020-08-19 00:59:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-19 01:00:09 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2fd2fa92cf09995c9e76409726800c12d33510c028526c6b526e11aa300885ca}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.681: INFO: Pod "nginx-deployment-7b8c6f4498-wlfkk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wlfkk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-wlfkk,UID:56d1fc18-a99b-4b5f-8e1a-6cc145666de4,ResourceVersion:941873,Generation:0,CreationTimestamp:2020-08-19 01:00:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x40004ce8f0 0x40004ce8f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x40004ceba0} {node.kubernetes.io/unreachable Exists  NoExecute 0x40004cebe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-19 01:00:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 19 01:00:23.682: INFO: Pod "nginx-deployment-7b8c6f4498-x4w9z" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x4w9z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1303,SelfLink:/api/v1/namespaces/deployment-1303/pods/nginx-deployment-7b8c6f4498-x4w9z,UID:e107e856-599e-4caf-a8a7-bfec68e1e45a,ResourceVersion:941701,Generation:0,CreationTimestamp:2020-08-19 00:59:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae224c58-e0e1-4d50-9dcf-f2624a2c6e42 0x40004cf2d0 0x40004cf2d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5nq2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5nq2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k5nq2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x40004cf640} {node.kubernetes.io/unreachable Exists  NoExecute 0x40004cf660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:59:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:00:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 00:59:57 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.65,StartTime:2020-08-19 00:59:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-19 01:00:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://39311e7638a38ba588c2137fc1348fb6fd88d44b8ced8ffc02d320a6cbe12138}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:00:23.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1303" for this suite.
Aug 19 01:01:01.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:01:01.295: INFO: namespace deployment-1303 deletion completed in 37.496694309s

• [SLOW TEST:64.455 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
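The [SLOW TEST] above exercises the Deployment controller's proportional scaling: when a Deployment that is mid-rollout is scaled, the added (or removed) replicas are split across its active ReplicaSets in proportion to their current sizes. A minimal sketch of that allocation rule, assuming integer shares rounded down with leftovers handed to the largest sets first (this mirrors the behaviour the e2e test asserts, not the exact kube-controller-manager implementation; the function name is illustrative):

```python
def proportional_scale(replica_counts, new_total):
    """Distribute new_total replicas across ReplicaSets in proportion
    to their current sizes. Illustrative sketch, not controller code."""
    current_total = sum(replica_counts)
    if current_total == 0:
        # Nothing to be proportional to; give everything to the first set.
        return [new_total] + [0] * (len(replica_counts) - 1)
    # Integer share for each ReplicaSet, rounding down.
    shares = [c * new_total // current_total for c in replica_counts]
    leftover = new_total - sum(shares)
    # Hand the rounding leftover out one replica at a time,
    # largest ReplicaSets first.
    order = sorted(range(len(replica_counts)),
                   key=lambda i: replica_counts[i], reverse=True)
    for i in order[:leftover]:
        shares[i] += 1
    return shares

# Scaling a rollout whose ReplicaSets sit at 5 and 3 replicas up to 16:
print(proportional_scale([5, 3], 16))  # [10, 6]
```

The key property the test checks is that the ratio between ReplicaSets is preserved (here 5:3 becomes 10:6) rather than all new replicas landing on the newest ReplicaSet.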
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:01:01.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 19 01:01:01.465: INFO: Waiting up to 5m0s for pod "downward-api-64e4f9a8-bc75-41b4-92a3-e83898abe6be" in namespace "downward-api-2589" to be "success or failure"
Aug 19 01:01:01.471: INFO: Pod "downward-api-64e4f9a8-bc75-41b4-92a3-e83898abe6be": Phase="Pending", Reason="", readiness=false. Elapsed: 5.87076ms
Aug 19 01:01:03.564: INFO: Pod "downward-api-64e4f9a8-bc75-41b4-92a3-e83898abe6be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098570297s
Aug 19 01:01:05.571: INFO: Pod "downward-api-64e4f9a8-bc75-41b4-92a3-e83898abe6be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104930405s
STEP: Saw pod success
Aug 19 01:01:05.571: INFO: Pod "downward-api-64e4f9a8-bc75-41b4-92a3-e83898abe6be" satisfied condition "success or failure"
Aug 19 01:01:05.579: INFO: Trying to get logs from node iruya-worker2 pod downward-api-64e4f9a8-bc75-41b4-92a3-e83898abe6be container dapi-container: 
STEP: delete the pod
Aug 19 01:01:05.641: INFO: Waiting for pod downward-api-64e4f9a8-bc75-41b4-92a3-e83898abe6be to disappear
Aug 19 01:01:05.662: INFO: Pod downward-api-64e4f9a8-bc75-41b4-92a3-e83898abe6be no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:01:05.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2589" for this suite.
Aug 19 01:01:11.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:01:11.787: INFO: namespace downward-api-2589 deletion completed in 6.115403339s

• [SLOW TEST:10.489 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
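The Downward API test above creates a pod whose container sees its own resource requests and limits as environment variables via `resourceFieldRef`. A minimal manifest doing the same (pod and variable names are illustrative; `resourceFieldRef` with `limits.cpu` / `requests.memory` is the actual API surface under test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory
```

The test asserts the pod reaches `Succeeded` and that the container's logs show the expected values, which is why the log above waits for "success or failure" and then fetches logs from the `dapi-container` container.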
SS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:01:11.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 19 01:01:28.021: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2534 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 01:01:28.021: INFO: >>> kubeConfig: /root/.kube/config
I0819 01:01:28.085274       7 log.go:172] (0x4003384fd0) (0x40022134a0) Create stream
I0819 01:01:28.085505       7 log.go:172] (0x4003384fd0) (0x40022134a0) Stream added, broadcasting: 1
I0819 01:01:28.089770       7 log.go:172] (0x4003384fd0) Reply frame received for 1
I0819 01:01:28.089925       7 log.go:172] (0x4003384fd0) (0x40035f2fa0) Create stream
I0819 01:01:28.089999       7 log.go:172] (0x4003384fd0) (0x40035f2fa0) Stream added, broadcasting: 3
I0819 01:01:28.091412       7 log.go:172] (0x4003384fd0) Reply frame received for 3
I0819 01:01:28.091560       7 log.go:172] (0x4003384fd0) (0x4003004280) Create stream
I0819 01:01:28.091633       7 log.go:172] (0x4003384fd0) (0x4003004280) Stream added, broadcasting: 5
I0819 01:01:28.093228       7 log.go:172] (0x4003384fd0) Reply frame received for 5
I0819 01:01:28.168320       7 log.go:172] (0x4003384fd0) Data frame received for 5
I0819 01:01:28.168516       7 log.go:172] (0x4003004280) (5) Data frame handling
I0819 01:01:28.168663       7 log.go:172] (0x4003384fd0) Data frame received for 3
I0819 01:01:28.168866       7 log.go:172] (0x40035f2fa0) (3) Data frame handling
I0819 01:01:28.168988       7 log.go:172] (0x40035f2fa0) (3) Data frame sent
I0819 01:01:28.169101       7 log.go:172] (0x4003384fd0) Data frame received for 3
I0819 01:01:28.169238       7 log.go:172] (0x40035f2fa0) (3) Data frame handling
I0819 01:01:28.170508       7 log.go:172] (0x4003384fd0) Data frame received for 1
I0819 01:01:28.170708       7 log.go:172] (0x40022134a0) (1) Data frame handling
I0819 01:01:28.170941       7 log.go:172] (0x40022134a0) (1) Data frame sent
I0819 01:01:28.171059       7 log.go:172] (0x4003384fd0) (0x40022134a0) Stream removed, broadcasting: 1
I0819 01:01:28.171182       7 log.go:172] (0x4003384fd0) Go away received
I0819 01:01:28.171622       7 log.go:172] (0x4003384fd0) (0x40022134a0) Stream removed, broadcasting: 1
I0819 01:01:28.171758       7 log.go:172] (0x4003384fd0) (0x40035f2fa0) Stream removed, broadcasting: 3
I0819 01:01:28.171820       7 log.go:172] (0x4003384fd0) (0x4003004280) Stream removed, broadcasting: 5
Aug 19 01:01:28.171: INFO: Exec stderr: ""
Aug 19 01:01:28.172: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2534 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 01:01:28.172: INFO: >>> kubeConfig: /root/.kube/config
I0819 01:01:28.290946       7 log.go:172] (0x40011ef1e0) (0x4003998460) Create stream
I0819 01:01:28.291150       7 log.go:172] (0x40011ef1e0) (0x4003998460) Stream added, broadcasting: 1
I0819 01:01:28.297522       7 log.go:172] (0x40011ef1e0) Reply frame received for 1
I0819 01:01:28.297752       7 log.go:172] (0x40011ef1e0) (0x40035f3040) Create stream
I0819 01:01:28.297862       7 log.go:172] (0x40011ef1e0) (0x40035f3040) Stream added, broadcasting: 3
I0819 01:01:28.299695       7 log.go:172] (0x40011ef1e0) Reply frame received for 3
I0819 01:01:28.299884       7 log.go:172] (0x40011ef1e0) (0x4002213540) Create stream
I0819 01:01:28.299977       7 log.go:172] (0x40011ef1e0) (0x4002213540) Stream added, broadcasting: 5
I0819 01:01:28.301462       7 log.go:172] (0x40011ef1e0) Reply frame received for 5
I0819 01:01:28.352703       7 log.go:172] (0x40011ef1e0) Data frame received for 5
I0819 01:01:28.353014       7 log.go:172] (0x40011ef1e0) Data frame received for 3
I0819 01:01:28.353251       7 log.go:172] (0x40035f3040) (3) Data frame handling
I0819 01:01:28.353552       7 log.go:172] (0x40035f3040) (3) Data frame sent
I0819 01:01:28.353768       7 log.go:172] (0x40011ef1e0) Data frame received for 3
I0819 01:01:28.354159       7 log.go:172] (0x40011ef1e0) Data frame received for 1
I0819 01:01:28.354305       7 log.go:172] (0x4003998460) (1) Data frame handling
I0819 01:01:28.354453       7 log.go:172] (0x4002213540) (5) Data frame handling
I0819 01:01:28.354685       7 log.go:172] (0x40035f3040) (3) Data frame handling
I0819 01:01:28.354902       7 log.go:172] (0x4003998460) (1) Data frame sent
I0819 01:01:28.355067       7 log.go:172] (0x40011ef1e0) (0x4003998460) Stream removed, broadcasting: 1
I0819 01:01:28.355209       7 log.go:172] (0x40011ef1e0) Go away received
I0819 01:01:28.355522       7 log.go:172] (0x40011ef1e0) (0x4003998460) Stream removed, broadcasting: 1
I0819 01:01:28.355610       7 log.go:172] (0x40011ef1e0) (0x40035f3040) Stream removed, broadcasting: 3
I0819 01:01:28.355677       7 log.go:172] (0x40011ef1e0) (0x4002213540) Stream removed, broadcasting: 5
Aug 19 01:01:28.355: INFO: Exec stderr: ""
Aug 19 01:01:28.356: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2534 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 01:01:28.356: INFO: >>> kubeConfig: /root/.kube/config
I0819 01:01:28.418096       7 log.go:172] (0x4003870f20) (0x4001c83b80) Create stream
I0819 01:01:28.418273       7 log.go:172] (0x4003870f20) (0x4001c83b80) Stream added, broadcasting: 1
I0819 01:01:28.423817       7 log.go:172] (0x4003870f20) Reply frame received for 1
I0819 01:01:28.424062       7 log.go:172] (0x4003870f20) (0x4003004320) Create stream
I0819 01:01:28.424161       7 log.go:172] (0x4003870f20) (0x4003004320) Stream added, broadcasting: 3
I0819 01:01:28.426179       7 log.go:172] (0x4003870f20) Reply frame received for 3
I0819 01:01:28.426397       7 log.go:172] (0x4003870f20) (0x40030043c0) Create stream
I0819 01:01:28.426507       7 log.go:172] (0x4003870f20) (0x40030043c0) Stream added, broadcasting: 5
I0819 01:01:28.428372       7 log.go:172] (0x4003870f20) Reply frame received for 5
I0819 01:01:28.514998       7 log.go:172] (0x4003870f20) Data frame received for 3
I0819 01:01:28.515202       7 log.go:172] (0x4003004320) (3) Data frame handling
I0819 01:01:28.515370       7 log.go:172] (0x4003870f20) Data frame received for 5
I0819 01:01:28.515642       7 log.go:172] (0x40030043c0) (5) Data frame handling
I0819 01:01:28.515916       7 log.go:172] (0x4003004320) (3) Data frame sent
I0819 01:01:28.516088       7 log.go:172] (0x4003870f20) Data frame received for 3
I0819 01:01:28.516353       7 log.go:172] (0x4003004320) (3) Data frame handling
I0819 01:01:28.516544       7 log.go:172] (0x4003870f20) Data frame received for 1
I0819 01:01:28.516646       7 log.go:172] (0x4001c83b80) (1) Data frame handling
I0819 01:01:28.516880       7 log.go:172] (0x4001c83b80) (1) Data frame sent
I0819 01:01:28.517020       7 log.go:172] (0x4003870f20) (0x4001c83b80) Stream removed, broadcasting: 1
I0819 01:01:28.517152       7 log.go:172] (0x4003870f20) Go away received
I0819 01:01:28.517611       7 log.go:172] (0x4003870f20) (0x4001c83b80) Stream removed, broadcasting: 1
I0819 01:01:28.517772       7 log.go:172] (0x4003870f20) (0x4003004320) Stream removed, broadcasting: 3
I0819 01:01:28.517905       7 log.go:172] (0x4003870f20) (0x40030043c0) Stream removed, broadcasting: 5
Aug 19 01:01:28.517: INFO: Exec stderr: ""
Aug 19 01:01:28.518: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2534 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 01:01:28.518: INFO: >>> kubeConfig: /root/.kube/config
I0819 01:01:28.579835       7 log.go:172] (0x40032264d0) (0x4002213860) Create stream
I0819 01:01:28.580031       7 log.go:172] (0x40032264d0) (0x4002213860) Stream added, broadcasting: 1
I0819 01:01:28.585221       7 log.go:172] (0x40032264d0) Reply frame received for 1
I0819 01:01:28.585414       7 log.go:172] (0x40032264d0) (0x4002213900) Create stream
I0819 01:01:28.585526       7 log.go:172] (0x40032264d0) (0x4002213900) Stream added, broadcasting: 3
I0819 01:01:28.586864       7 log.go:172] (0x40032264d0) Reply frame received for 3
I0819 01:01:28.586995       7 log.go:172] (0x40032264d0) (0x4003015540) Create stream
I0819 01:01:28.587065       7 log.go:172] (0x40032264d0) (0x4003015540) Stream added, broadcasting: 5
I0819 01:01:28.588280       7 log.go:172] (0x40032264d0) Reply frame received for 5
I0819 01:01:28.656360       7 log.go:172] (0x40032264d0) Data frame received for 5
I0819 01:01:28.656519       7 log.go:172] (0x4003015540) (5) Data frame handling
I0819 01:01:28.656650       7 log.go:172] (0x40032264d0) Data frame received for 3
I0819 01:01:28.656871       7 log.go:172] (0x4002213900) (3) Data frame handling
I0819 01:01:28.657017       7 log.go:172] (0x4002213900) (3) Data frame sent
I0819 01:01:28.657134       7 log.go:172] (0x40032264d0) Data frame received for 3
I0819 01:01:28.657240       7 log.go:172] (0x4002213900) (3) Data frame handling
I0819 01:01:28.657494       7 log.go:172] (0x40032264d0) Data frame received for 1
I0819 01:01:28.657790       7 log.go:172] (0x4002213860) (1) Data frame handling
I0819 01:01:28.657882       7 log.go:172] (0x4002213860) (1) Data frame sent
I0819 01:01:28.657969       7 log.go:172] (0x40032264d0) (0x4002213860) Stream removed, broadcasting: 1
I0819 01:01:28.658083       7 log.go:172] (0x40032264d0) Go away received
I0819 01:01:28.658554       7 log.go:172] (0x40032264d0) (0x4002213860) Stream removed, broadcasting: 1
I0819 01:01:28.658700       7 log.go:172] (0x40032264d0) (0x4002213900) Stream removed, broadcasting: 3
I0819 01:01:28.658769       7 log.go:172] (0x40032264d0) (0x4003015540) Stream removed, broadcasting: 5
Aug 19 01:01:28.658: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 19 01:01:28.659: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2534 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 01:01:28.659: INFO: >>> kubeConfig: /root/.kube/config
I0819 01:01:28.710538       7 log.go:172] (0x4003226fd0) (0x4002213c20) Create stream
I0819 01:01:28.710811       7 log.go:172] (0x4003226fd0) (0x4002213c20) Stream added, broadcasting: 1
I0819 01:01:28.714890       7 log.go:172] (0x4003226fd0) Reply frame received for 1
I0819 01:01:28.715136       7 log.go:172] (0x4003226fd0) (0x40030155e0) Create stream
I0819 01:01:28.715244       7 log.go:172] (0x4003226fd0) (0x40030155e0) Stream added, broadcasting: 3
I0819 01:01:28.717120       7 log.go:172] (0x4003226fd0) Reply frame received for 3
I0819 01:01:28.717266       7 log.go:172] (0x4003226fd0) (0x4002213cc0) Create stream
I0819 01:01:28.717363       7 log.go:172] (0x4003226fd0) (0x4002213cc0) Stream added, broadcasting: 5
I0819 01:01:28.719119       7 log.go:172] (0x4003226fd0) Reply frame received for 5
I0819 01:01:28.786721       7 log.go:172] (0x4003226fd0) Data frame received for 5
I0819 01:01:28.786849       7 log.go:172] (0x4002213cc0) (5) Data frame handling
I0819 01:01:28.787040       7 log.go:172] (0x4003226fd0) Data frame received for 3
I0819 01:01:28.787199       7 log.go:172] (0x40030155e0) (3) Data frame handling
I0819 01:01:28.787344       7 log.go:172] (0x40030155e0) (3) Data frame sent
I0819 01:01:28.787436       7 log.go:172] (0x4003226fd0) Data frame received for 3
I0819 01:01:28.787516       7 log.go:172] (0x40030155e0) (3) Data frame handling
I0819 01:01:28.788307       7 log.go:172] (0x4003226fd0) Data frame received for 1
I0819 01:01:28.788437       7 log.go:172] (0x4002213c20) (1) Data frame handling
I0819 01:01:28.788563       7 log.go:172] (0x4002213c20) (1) Data frame sent
I0819 01:01:28.788681       7 log.go:172] (0x4003226fd0) (0x4002213c20) Stream removed, broadcasting: 1
I0819 01:01:28.788923       7 log.go:172] (0x4003226fd0) Go away received
I0819 01:01:28.789221       7 log.go:172] (0x4003226fd0) (0x4002213c20) Stream removed, broadcasting: 1
I0819 01:01:28.789297       7 log.go:172] (0x4003226fd0) (0x40030155e0) Stream removed, broadcasting: 3
I0819 01:01:28.789355       7 log.go:172] (0x4003226fd0) (0x4002213cc0) Stream removed, broadcasting: 5
Aug 19 01:01:28.789: INFO: Exec stderr: ""
Aug 19 01:01:28.789: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2534 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 01:01:28.789: INFO: >>> kubeConfig: /root/.kube/config
I0819 01:01:28.858634       7 log.go:172] (0x40011efce0) (0x4003998780) Create stream
I0819 01:01:28.858800       7 log.go:172] (0x40011efce0) (0x4003998780) Stream added, broadcasting: 1
I0819 01:01:28.863129       7 log.go:172] (0x40011efce0) Reply frame received for 1
I0819 01:01:28.863324       7 log.go:172] (0x40011efce0) (0x4003015680) Create stream
I0819 01:01:28.863424       7 log.go:172] (0x40011efce0) (0x4003015680) Stream added, broadcasting: 3
I0819 01:01:28.865267       7 log.go:172] (0x40011efce0) Reply frame received for 3
I0819 01:01:28.865539       7 log.go:172] (0x40011efce0) (0x4001c83c20) Create stream
I0819 01:01:28.865641       7 log.go:172] (0x40011efce0) (0x4001c83c20) Stream added, broadcasting: 5
I0819 01:01:28.867189       7 log.go:172] (0x40011efce0) Reply frame received for 5
I0819 01:01:28.934004       7 log.go:172] (0x40011efce0) Data frame received for 3
I0819 01:01:28.934145       7 log.go:172] (0x4003015680) (3) Data frame handling
I0819 01:01:28.934252       7 log.go:172] (0x40011efce0) Data frame received for 5
I0819 01:01:28.934380       7 log.go:172] (0x4001c83c20) (5) Data frame handling
I0819 01:01:28.934564       7 log.go:172] (0x4003015680) (3) Data frame sent
I0819 01:01:28.934761       7 log.go:172] (0x40011efce0) Data frame received for 3
I0819 01:01:28.934832       7 log.go:172] (0x4003015680) (3) Data frame handling
I0819 01:01:28.935112       7 log.go:172] (0x40011efce0) Data frame received for 1
I0819 01:01:28.935234       7 log.go:172] (0x4003998780) (1) Data frame handling
I0819 01:01:28.935357       7 log.go:172] (0x4003998780) (1) Data frame sent
I0819 01:01:28.935471       7 log.go:172] (0x40011efce0) (0x4003998780) Stream removed, broadcasting: 1
I0819 01:01:28.935632       7 log.go:172] (0x40011efce0) Go away received
I0819 01:01:28.936366       7 log.go:172] (0x40011efce0) (0x4003998780) Stream removed, broadcasting: 1
I0819 01:01:28.936478       7 log.go:172] (0x40011efce0) (0x4003015680) Stream removed, broadcasting: 3
I0819 01:01:28.936582       7 log.go:172] (0x40011efce0) (0x4001c83c20) Stream removed, broadcasting: 5
Aug 19 01:01:28.936: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 19 01:01:28.937: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2534 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 01:01:28.937: INFO: >>> kubeConfig: /root/.kube/config
I0819 01:01:29.086234       7 log.go:172] (0x40032784d0) (0x4002a740a0) Create stream
I0819 01:01:29.086426       7 log.go:172] (0x40032784d0) (0x4002a740a0) Stream added, broadcasting: 1
I0819 01:01:29.095832       7 log.go:172] (0x40032784d0) Reply frame received for 1
I0819 01:01:29.096145       7 log.go:172] (0x40032784d0) (0x4002a74140) Create stream
I0819 01:01:29.096273       7 log.go:172] (0x40032784d0) (0x4002a74140) Stream added, broadcasting: 3
I0819 01:01:29.099068       7 log.go:172] (0x40032784d0) Reply frame received for 3
I0819 01:01:29.099203       7 log.go:172] (0x40032784d0) (0x4002a74280) Create stream
I0819 01:01:29.099274       7 log.go:172] (0x40032784d0) (0x4002a74280) Stream added, broadcasting: 5
I0819 01:01:29.100667       7 log.go:172] (0x40032784d0) Reply frame received for 5
I0819 01:01:29.164226       7 log.go:172] (0x40032784d0) Data frame received for 5
I0819 01:01:29.164342       7 log.go:172] (0x4002a74280) (5) Data frame handling
I0819 01:01:29.164441       7 log.go:172] (0x40032784d0) Data frame received for 3
I0819 01:01:29.164526       7 log.go:172] (0x4002a74140) (3) Data frame handling
I0819 01:01:29.164614       7 log.go:172] (0x4002a74140) (3) Data frame sent
I0819 01:01:29.164682       7 log.go:172] (0x40032784d0) Data frame received for 3
I0819 01:01:29.164879       7 log.go:172] (0x4002a74140) (3) Data frame handling
I0819 01:01:29.165603       7 log.go:172] (0x40032784d0) Data frame received for 1
I0819 01:01:29.165691       7 log.go:172] (0x4002a740a0) (1) Data frame handling
I0819 01:01:29.165780       7 log.go:172] (0x4002a740a0) (1) Data frame sent
I0819 01:01:29.165872       7 log.go:172] (0x40032784d0) (0x4002a740a0) Stream removed, broadcasting: 1
I0819 01:01:29.165971       7 log.go:172] (0x40032784d0) Go away received
I0819 01:01:29.166150       7 log.go:172] (0x40032784d0) (0x4002a740a0) Stream removed, broadcasting: 1
I0819 01:01:29.166242       7 log.go:172] (0x40032784d0) (0x4002a74140) Stream removed, broadcasting: 3
I0819 01:01:29.166347       7 log.go:172] (0x40032784d0) (0x4002a74280) Stream removed, broadcasting: 5
Aug 19 01:01:29.166: INFO: Exec stderr: ""
Aug 19 01:01:29.166: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2534 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 01:01:29.166: INFO: >>> kubeConfig: /root/.kube/config
I0819 01:01:29.217205       7 log.go:172] (0x4003279130) (0x4002a745a0) Create stream
I0819 01:01:29.217394       7 log.go:172] (0x4003279130) (0x4002a745a0) Stream added, broadcasting: 1
I0819 01:01:29.224253       7 log.go:172] (0x4003279130) Reply frame received for 1
I0819 01:01:29.224445       7 log.go:172] (0x4003279130) (0x4003998820) Create stream
I0819 01:01:29.224544       7 log.go:172] (0x4003279130) (0x4003998820) Stream added, broadcasting: 3
I0819 01:01:29.226253       7 log.go:172] (0x4003279130) Reply frame received for 3
I0819 01:01:29.226391       7 log.go:172] (0x4003279130) (0x40039988c0) Create stream
I0819 01:01:29.226458       7 log.go:172] (0x4003279130) (0x40039988c0) Stream added, broadcasting: 5
I0819 01:01:29.227740       7 log.go:172] (0x4003279130) Reply frame received for 5
I0819 01:01:29.296111       7 log.go:172] (0x4003279130) Data frame received for 5
I0819 01:01:29.296344       7 log.go:172] (0x40039988c0) (5) Data frame handling
I0819 01:01:29.296513       7 log.go:172] (0x4003279130) Data frame received for 3
I0819 01:01:29.296634       7 log.go:172] (0x4003998820) (3) Data frame handling
I0819 01:01:29.296844       7 log.go:172] (0x4003998820) (3) Data frame sent
I0819 01:01:29.297003       7 log.go:172] (0x4003279130) Data frame received for 3
I0819 01:01:29.297116       7 log.go:172] (0x4003998820) (3) Data frame handling
I0819 01:01:29.298119       7 log.go:172] (0x4003279130) Data frame received for 1
I0819 01:01:29.298255       7 log.go:172] (0x4002a745a0) (1) Data frame handling
I0819 01:01:29.298380       7 log.go:172] (0x4002a745a0) (1) Data frame sent
I0819 01:01:29.298523       7 log.go:172] (0x4003279130) (0x4002a745a0) Stream removed, broadcasting: 1
I0819 01:01:29.298774       7 log.go:172] (0x4003279130) Go away received
I0819 01:01:29.299126       7 log.go:172] (0x4003279130) (0x4002a745a0) Stream removed, broadcasting: 1
I0819 01:01:29.299261       7 log.go:172] (0x4003279130) (0x4003998820) Stream removed, broadcasting: 3
I0819 01:01:29.299402       7 log.go:172] (0x4003279130) (0x40039988c0) Stream removed, broadcasting: 5
Aug 19 01:01:29.299: INFO: Exec stderr: ""
Aug 19 01:01:29.299: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2534 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 01:01:29.299: INFO: >>> kubeConfig: /root/.kube/config
I0819 01:01:29.647642       7 log.go:172] (0x4003279a20) (0x4002a748c0) Create stream
I0819 01:01:29.647798       7 log.go:172] (0x4003279a20) (0x4002a748c0) Stream added, broadcasting: 1
I0819 01:01:29.651447       7 log.go:172] (0x4003279a20) Reply frame received for 1
I0819 01:01:29.651565       7 log.go:172] (0x4003279a20) (0x4003004460) Create stream
I0819 01:01:29.651624       7 log.go:172] (0x4003279a20) (0x4003004460) Stream added, broadcasting: 3
I0819 01:01:29.653130       7 log.go:172] (0x4003279a20) Reply frame received for 3
I0819 01:01:29.653288       7 log.go:172] (0x4003279a20) (0x4002a74960) Create stream
I0819 01:01:29.653413       7 log.go:172] (0x4003279a20) (0x4002a74960) Stream added, broadcasting: 5
I0819 01:01:29.654699       7 log.go:172] (0x4003279a20) Reply frame received for 5
I0819 01:01:29.738591       7 log.go:172] (0x4003279a20) Data frame received for 3
I0819 01:01:29.738740       7 log.go:172] (0x4003004460) (3) Data frame handling
I0819 01:01:29.738834       7 log.go:172] (0x4003279a20) Data frame received for 5
I0819 01:01:29.738923       7 log.go:172] (0x4002a74960) (5) Data frame handling
I0819 01:01:29.738998       7 log.go:172] (0x4003004460) (3) Data frame sent
I0819 01:01:29.739079       7 log.go:172] (0x4003279a20) Data frame received for 3
I0819 01:01:29.739135       7 log.go:172] (0x4003004460) (3) Data frame handling
I0819 01:01:29.739642       7 log.go:172] (0x4003279a20) Data frame received for 1
I0819 01:01:29.739735       7 log.go:172] (0x4002a748c0) (1) Data frame handling
I0819 01:01:29.739826       7 log.go:172] (0x4002a748c0) (1) Data frame sent
I0819 01:01:29.739914       7 log.go:172] (0x4003279a20) (0x4002a748c0) Stream removed, broadcasting: 1
I0819 01:01:29.740016       7 log.go:172] (0x4003279a20) Go away received
I0819 01:01:29.740276       7 log.go:172] (0x4003279a20) (0x4002a748c0) Stream removed, broadcasting: 1
I0819 01:01:29.740360       7 log.go:172] (0x4003279a20) (0x4003004460) Stream removed, broadcasting: 3
I0819 01:01:29.740432       7 log.go:172] (0x4003279a20) (0x4002a74960) Stream removed, broadcasting: 5
Aug 19 01:01:29.740: INFO: Exec stderr: ""
Aug 19 01:01:29.740: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2534 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 19 01:01:29.740: INFO: >>> kubeConfig: /root/.kube/config
I0819 01:01:29.859952       7 log.go:172] (0x4002e4a4d0) (0x40035f3360) Create stream
I0819 01:01:29.860116       7 log.go:172] (0x4002e4a4d0) (0x40035f3360) Stream added, broadcasting: 1
I0819 01:01:29.864898       7 log.go:172] (0x4002e4a4d0) Reply frame received for 1
I0819 01:01:29.865242       7 log.go:172] (0x4002e4a4d0) (0x4001c83e00) Create stream
I0819 01:01:29.865326       7 log.go:172] (0x4002e4a4d0) (0x4001c83e00) Stream added, broadcasting: 3
I0819 01:01:29.866924       7 log.go:172] (0x4002e4a4d0) Reply frame received for 3
I0819 01:01:29.867098       7 log.go:172] (0x4002e4a4d0) (0x40035f3400) Create stream
I0819 01:01:29.867180       7 log.go:172] (0x4002e4a4d0) (0x40035f3400) Stream added, broadcasting: 5
I0819 01:01:29.868642       7 log.go:172] (0x4002e4a4d0) Reply frame received for 5
I0819 01:01:29.927031       7 log.go:172] (0x4002e4a4d0) Data frame received for 3
I0819 01:01:29.927186       7 log.go:172] (0x4001c83e00) (3) Data frame handling
I0819 01:01:29.927289       7 log.go:172] (0x4002e4a4d0) Data frame received for 5
I0819 01:01:29.927377       7 log.go:172] (0x40035f3400) (5) Data frame handling
I0819 01:01:29.927550       7 log.go:172] (0x4001c83e00) (3) Data frame sent
I0819 01:01:29.927689       7 log.go:172] (0x4002e4a4d0) Data frame received for 3
I0819 01:01:29.927779       7 log.go:172] (0x4001c83e00) (3) Data frame handling
I0819 01:01:29.929609       7 log.go:172] (0x4002e4a4d0) Data frame received for 1
I0819 01:01:29.929688       7 log.go:172] (0x40035f3360) (1) Data frame handling
I0819 01:01:29.929957       7 log.go:172] (0x40035f3360) (1) Data frame sent
I0819 01:01:29.930186       7 log.go:172] (0x4002e4a4d0) (0x40035f3360) Stream removed, broadcasting: 1
I0819 01:01:29.930411       7 log.go:172] (0x4002e4a4d0) Go away received
I0819 01:01:29.930831       7 log.go:172] (0x4002e4a4d0) (0x40035f3360) Stream removed, broadcasting: 1
I0819 01:01:29.930970       7 log.go:172] (0x4002e4a4d0) (0x4001c83e00) Stream removed, broadcasting: 3
I0819 01:01:29.931076       7 log.go:172] (0x4002e4a4d0) (0x40035f3400) Stream removed, broadcasting: 5
Aug 19 01:01:29.931: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:01:29.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-2534" for this suite.
Aug 19 01:02:15.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:02:16.108: INFO: namespace e2e-kubelet-etc-hosts-2534 deletion completed in 46.146128782s

• [SLOW TEST:64.320 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:02:16.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Aug 19 01:02:16.264: INFO: Waiting up to 5m0s for pod "client-containers-b2293541-b3b2-4bcd-ba54-0b7b89472d1e" in namespace "containers-689" to be "success or failure"
Aug 19 01:02:16.312: INFO: Pod "client-containers-b2293541-b3b2-4bcd-ba54-0b7b89472d1e": Phase="Pending", Reason="", readiness=false. Elapsed: 47.861815ms
Aug 19 01:02:18.319: INFO: Pod "client-containers-b2293541-b3b2-4bcd-ba54-0b7b89472d1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055267783s
Aug 19 01:02:20.352: INFO: Pod "client-containers-b2293541-b3b2-4bcd-ba54-0b7b89472d1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088145179s
Aug 19 01:02:22.359: INFO: Pod "client-containers-b2293541-b3b2-4bcd-ba54-0b7b89472d1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095163882s
STEP: Saw pod success
Aug 19 01:02:22.359: INFO: Pod "client-containers-b2293541-b3b2-4bcd-ba54-0b7b89472d1e" satisfied condition "success or failure"
Aug 19 01:02:22.364: INFO: Trying to get logs from node iruya-worker pod client-containers-b2293541-b3b2-4bcd-ba54-0b7b89472d1e container test-container: 
STEP: delete the pod
Aug 19 01:02:22.405: INFO: Waiting for pod client-containers-b2293541-b3b2-4bcd-ba54-0b7b89472d1e to disappear
Aug 19 01:02:22.417: INFO: Pod client-containers-b2293541-b3b2-4bcd-ba54-0b7b89472d1e no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:02:22.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-689" for this suite.
Aug 19 01:02:28.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:02:28.630: INFO: namespace containers-689 deletion completed in 6.200532233s

• [SLOW TEST:12.519 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:02:28.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 01:02:28.772: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 19 01:02:28.794: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:28.826: INFO: Number of nodes with available pods: 0
Aug 19 01:02:28.826: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 01:02:29.837: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:29.843: INFO: Number of nodes with available pods: 0
Aug 19 01:02:29.843: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 01:02:30.838: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:30.843: INFO: Number of nodes with available pods: 0
Aug 19 01:02:30.843: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 01:02:31.838: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:31.843: INFO: Number of nodes with available pods: 0
Aug 19 01:02:31.843: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 01:02:32.838: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:32.845: INFO: Number of nodes with available pods: 1
Aug 19 01:02:32.845: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 01:02:33.838: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:33.845: INFO: Number of nodes with available pods: 2
Aug 19 01:02:33.845: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 19 01:02:33.923: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:33.924: INFO: Wrong image for pod: daemon-set-p7qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:33.956: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:34.964: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:34.965: INFO: Wrong image for pod: daemon-set-p7qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:34.974: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:35.962: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:35.962: INFO: Wrong image for pod: daemon-set-p7qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:35.963: INFO: Pod daemon-set-p7qk4 is not available
Aug 19 01:02:35.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:36.965: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:36.965: INFO: Wrong image for pod: daemon-set-p7qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:36.965: INFO: Pod daemon-set-p7qk4 is not available
Aug 19 01:02:36.975: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:37.963: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:37.963: INFO: Wrong image for pod: daemon-set-p7qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:37.963: INFO: Pod daemon-set-p7qk4 is not available
Aug 19 01:02:37.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:38.964: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:38.965: INFO: Wrong image for pod: daemon-set-p7qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:38.965: INFO: Pod daemon-set-p7qk4 is not available
Aug 19 01:02:38.974: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:39.963: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:39.964: INFO: Wrong image for pod: daemon-set-p7qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:39.964: INFO: Pod daemon-set-p7qk4 is not available
Aug 19 01:02:39.971: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:40.961: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:40.962: INFO: Wrong image for pod: daemon-set-p7qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:40.962: INFO: Pod daemon-set-p7qk4 is not available
Aug 19 01:02:40.968: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:41.963: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:41.963: INFO: Wrong image for pod: daemon-set-p7qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:41.963: INFO: Pod daemon-set-p7qk4 is not available
Aug 19 01:02:41.973: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:42.964: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:42.964: INFO: Wrong image for pod: daemon-set-p7qk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:42.964: INFO: Pod daemon-set-p7qk4 is not available
Aug 19 01:02:42.975: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:43.963: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:43.964: INFO: Pod daemon-set-np2qs is not available
Aug 19 01:02:43.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:45.104: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:45.104: INFO: Pod daemon-set-np2qs is not available
Aug 19 01:02:45.113: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:46.008: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:46.008: INFO: Pod daemon-set-np2qs is not available
Aug 19 01:02:46.018: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:46.963: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:46.964: INFO: Pod daemon-set-np2qs is not available
Aug 19 01:02:46.974: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:47.963: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:47.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:48.962: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:48.971: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:49.973: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:49.973: INFO: Pod daemon-set-4c6rh is not available
Aug 19 01:02:49.984: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:50.964: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:50.964: INFO: Pod daemon-set-4c6rh is not available
Aug 19 01:02:50.973: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:51.964: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:51.964: INFO: Pod daemon-set-4c6rh is not available
Aug 19 01:02:51.973: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:52.963: INFO: Wrong image for pod: daemon-set-4c6rh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 19 01:02:52.963: INFO: Pod daemon-set-4c6rh is not available
Aug 19 01:02:52.971: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:53.964: INFO: Pod daemon-set-7m86t is not available
Aug 19 01:02:53.973: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 19 01:02:53.981: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:53.990: INFO: Number of nodes with available pods: 1
Aug 19 01:02:53.990: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 01:02:55.001: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:55.008: INFO: Number of nodes with available pods: 1
Aug 19 01:02:55.008: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 01:02:56.042: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:56.047: INFO: Number of nodes with available pods: 1
Aug 19 01:02:56.047: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 01:02:57.004: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:02:57.010: INFO: Number of nodes with available pods: 2
Aug 19 01:02:57.010: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4814, will wait for the garbage collector to delete the pods
Aug 19 01:02:57.095: INFO: Deleting DaemonSet.extensions daemon-set took: 6.487508ms
Aug 19 01:02:57.396: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.751829ms
Aug 19 01:03:03.701: INFO: Number of nodes with available pods: 0
Aug 19 01:03:03.701: INFO: Number of running nodes: 0, number of available pods: 0
Aug 19 01:03:03.705: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4814/daemonsets","resourceVersion":"942706"},"items":null}

Aug 19 01:03:03.709: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4814/pods","resourceVersion":"942706"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:03:03.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4814" for this suite.
Aug 19 01:03:09.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:03:09.884: INFO: namespace daemonsets-4814 deletion completed in 6.156539312s

• [SLOW TEST:41.250 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:03:09.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 19 01:03:09.992: INFO: Waiting up to 5m0s for pod "pod-489658ac-df45-432b-973d-838bf8f9d332" in namespace "emptydir-3215" to be "success or failure"
Aug 19 01:03:10.008: INFO: Pod "pod-489658ac-df45-432b-973d-838bf8f9d332": Phase="Pending", Reason="", readiness=false. Elapsed: 15.149206ms
Aug 19 01:03:12.014: INFO: Pod "pod-489658ac-df45-432b-973d-838bf8f9d332": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021138569s
Aug 19 01:03:14.025: INFO: Pod "pod-489658ac-df45-432b-973d-838bf8f9d332": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032793903s
STEP: Saw pod success
Aug 19 01:03:14.025: INFO: Pod "pod-489658ac-df45-432b-973d-838bf8f9d332" satisfied condition "success or failure"
Aug 19 01:03:14.029: INFO: Trying to get logs from node iruya-worker pod pod-489658ac-df45-432b-973d-838bf8f9d332 container test-container: 
STEP: delete the pod
Aug 19 01:03:14.080: INFO: Waiting for pod pod-489658ac-df45-432b-973d-838bf8f9d332 to disappear
Aug 19 01:03:14.504: INFO: Pod pod-489658ac-df45-432b-973d-838bf8f9d332 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:03:14.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3215" for this suite.
Aug 19 01:03:20.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:03:20.943: INFO: namespace emptydir-3215 deletion completed in 6.392047974s

• [SLOW TEST:11.057 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:03:20.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Aug 19 01:03:21.386: INFO: Waiting up to 5m0s for pod "client-containers-51917071-35b3-45dc-9714-a6c0edc999be" in namespace "containers-7385" to be "success or failure"
Aug 19 01:03:21.424: INFO: Pod "client-containers-51917071-35b3-45dc-9714-a6c0edc999be": Phase="Pending", Reason="", readiness=false. Elapsed: 38.190304ms
Aug 19 01:03:23.437: INFO: Pod "client-containers-51917071-35b3-45dc-9714-a6c0edc999be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051182012s
Aug 19 01:03:25.463: INFO: Pod "client-containers-51917071-35b3-45dc-9714-a6c0edc999be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077070593s
STEP: Saw pod success
Aug 19 01:03:25.463: INFO: Pod "client-containers-51917071-35b3-45dc-9714-a6c0edc999be" satisfied condition "success or failure"
Aug 19 01:03:25.469: INFO: Trying to get logs from node iruya-worker pod client-containers-51917071-35b3-45dc-9714-a6c0edc999be container test-container: 
STEP: delete the pod
Aug 19 01:03:25.512: INFO: Waiting for pod client-containers-51917071-35b3-45dc-9714-a6c0edc999be to disappear
Aug 19 01:03:25.520: INFO: Pod client-containers-51917071-35b3-45dc-9714-a6c0edc999be no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:03:25.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7385" for this suite.
Aug 19 01:03:31.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:03:31.687: INFO: namespace containers-7385 deletion completed in 6.160081155s

• [SLOW TEST:10.742 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:03:31.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-9772/configmap-test-4f7ccbaa-049a-411f-a426-10f384534450
STEP: Creating a pod to test consume configMaps
Aug 19 01:03:31.764: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c9bd3b2-2f8e-444c-bace-48d2779d7e52" in namespace "configmap-9772" to be "success or failure"
Aug 19 01:03:31.793: INFO: Pod "pod-configmaps-6c9bd3b2-2f8e-444c-bace-48d2779d7e52": Phase="Pending", Reason="", readiness=false. Elapsed: 27.920761ms
Aug 19 01:03:33.841: INFO: Pod "pod-configmaps-6c9bd3b2-2f8e-444c-bace-48d2779d7e52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075865151s
Aug 19 01:03:35.847: INFO: Pod "pod-configmaps-6c9bd3b2-2f8e-444c-bace-48d2779d7e52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081914733s
STEP: Saw pod success
Aug 19 01:03:35.847: INFO: Pod "pod-configmaps-6c9bd3b2-2f8e-444c-bace-48d2779d7e52" satisfied condition "success or failure"
Aug 19 01:03:35.853: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-6c9bd3b2-2f8e-444c-bace-48d2779d7e52 container env-test: 
STEP: delete the pod
Aug 19 01:03:35.888: INFO: Waiting for pod pod-configmaps-6c9bd3b2-2f8e-444c-bace-48d2779d7e52 to disappear
Aug 19 01:03:35.892: INFO: Pod pod-configmaps-6c9bd3b2-2f8e-444c-bace-48d2779d7e52 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:03:35.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9772" for this suite.
Aug 19 01:03:41.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:03:42.090: INFO: namespace configmap-9772 deletion completed in 6.190443836s

• [SLOW TEST:10.401 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:03:42.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 19 01:03:52.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 19 01:03:52.284: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 19 01:03:54.284: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 19 01:03:54.292: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 19 01:03:56.285: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 19 01:03:56.291: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 19 01:03:58.285: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 19 01:03:58.292: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 19 01:04:00.285: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 19 01:04:00.293: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 19 01:04:02.285: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 19 01:04:02.292: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 19 01:04:04.284: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 19 01:04:04.291: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 19 01:04:06.285: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 19 01:04:06.292: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 19 01:04:08.285: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 19 01:04:08.292: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 19 01:04:10.285: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 19 01:04:10.291: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:04:10.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5317" for this suite.
Aug 19 01:04:32.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:04:32.460: INFO: namespace container-lifecycle-hook-5317 deletion completed in 22.150380527s

• [SLOW TEST:50.365 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:04:32.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8926
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Aug 19 01:04:32.594: INFO: Found 0 stateful pods, waiting for 3
Aug 19 01:04:42.652: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 01:04:42.652: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 01:04:42.653: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 19 01:04:52.951: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 01:04:52.952: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 01:04:52.952: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 19 01:05:02.604: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 01:05:02.604: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 01:05:02.604: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 19 01:05:02.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8926 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 19 01:05:09.163: INFO: stderr: "I0819 01:05:08.575481    2352 log.go:172] (0x4000136dc0) (0x400062e960) Create stream\nI0819 01:05:08.579805    2352 log.go:172] (0x4000136dc0) (0x400062e960) Stream added, broadcasting: 1\nI0819 01:05:08.595133    2352 log.go:172] (0x4000136dc0) Reply frame received for 1\nI0819 01:05:08.596563    2352 log.go:172] (0x4000136dc0) (0x40007740a0) Create stream\nI0819 01:05:08.596792    2352 log.go:172] (0x4000136dc0) (0x40007740a0) Stream added, broadcasting: 3\nI0819 01:05:08.599287    2352 log.go:172] (0x4000136dc0) Reply frame received for 3\nI0819 01:05:08.599624    2352 log.go:172] (0x4000136dc0) (0x4000774140) Create stream\nI0819 01:05:08.599736    2352 log.go:172] (0x4000136dc0) (0x4000774140) Stream added, broadcasting: 5\nI0819 01:05:08.601471    2352 log.go:172] (0x4000136dc0) Reply frame received for 5\nI0819 01:05:08.669886    2352 log.go:172] (0x4000136dc0) Data frame received for 5\nI0819 01:05:08.670243    2352 log.go:172] (0x4000774140) (5) Data frame handling\nI0819 01:05:08.670848    2352 log.go:172] (0x4000774140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0819 01:05:09.142595    2352 log.go:172] (0x4000136dc0) Data frame received for 5\nI0819 01:05:09.142862    2352 log.go:172] (0x4000774140) (5) Data frame handling\nI0819 01:05:09.143254    2352 log.go:172] (0x4000136dc0) Data frame received for 3\nI0819 01:05:09.143515    2352 log.go:172] (0x40007740a0) (3) Data frame handling\nI0819 01:05:09.143743    2352 log.go:172] (0x40007740a0) (3) Data frame sent\nI0819 01:05:09.143932    2352 log.go:172] (0x4000136dc0) Data frame received for 3\nI0819 01:05:09.144089    2352 log.go:172] (0x40007740a0) (3) Data frame handling\nI0819 01:05:09.144243    2352 log.go:172] (0x4000136dc0) Data frame received for 1\nI0819 01:05:09.144330    2352 log.go:172] (0x400062e960) (1) Data frame handling\nI0819 01:05:09.144407    2352 log.go:172] (0x400062e960) (1) Data frame sent\nI0819 01:05:09.146247    2352 log.go:172] (0x4000136dc0) (0x400062e960) Stream removed, broadcasting: 1\nI0819 01:05:09.149260    2352 log.go:172] (0x4000136dc0) Go away received\nI0819 01:05:09.153188    2352 log.go:172] (0x4000136dc0) (0x400062e960) Stream removed, broadcasting: 1\nI0819 01:05:09.153688    2352 log.go:172] (0x4000136dc0) (0x40007740a0) Stream removed, broadcasting: 3\nI0819 01:05:09.153890    2352 log.go:172] (0x4000136dc0) (0x4000774140) Stream removed, broadcasting: 5\n"
Aug 19 01:05:09.164: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 19 01:05:09.165: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 19 01:05:19.209: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 19 01:05:29.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8926 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 01:05:31.477: INFO: stderr: "I0819 01:05:31.328711    2383 log.go:172] (0x40008f4630) (0x40006806e0) Create stream\nI0819 01:05:31.333692    2383 log.go:172] (0x40008f4630) (0x40006806e0) Stream added, broadcasting: 1\nI0819 01:05:31.354706    2383 log.go:172] (0x40008f4630) Reply frame received for 1\nI0819 01:05:31.355510    2383 log.go:172] (0x40008f4630) (0x4000680780) Create stream\nI0819 01:05:31.355619    2383 log.go:172] (0x40008f4630) (0x4000680780) Stream added, broadcasting: 3\nI0819 01:05:31.357948    2383 log.go:172] (0x40008f4630) Reply frame received for 3\nI0819 01:05:31.358351    2383 log.go:172] (0x40008f4630) (0x40009a4000) Create stream\nI0819 01:05:31.358448    2383 log.go:172] (0x40008f4630) (0x40009a4000) Stream added, broadcasting: 5\nI0819 01:05:31.359968    2383 log.go:172] (0x40008f4630) Reply frame received for 5\nI0819 01:05:31.457827    2383 log.go:172] (0x40008f4630) Data frame received for 3\nI0819 01:05:31.458027    2383 log.go:172] (0x40008f4630) Data frame received for 1\nI0819 01:05:31.458210    2383 log.go:172] (0x40006806e0) (1) Data frame handling\nI0819 01:05:31.458481    2383 log.go:172] (0x40008f4630) Data frame received for 5\nI0819 01:05:31.458676    2383 log.go:172] (0x40009a4000) (5) Data frame handling\nI0819 01:05:31.458941    2383 log.go:172] (0x4000680780) (3) Data frame handling\nI0819 01:05:31.459568    2383 log.go:172] (0x40009a4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0819 01:05:31.460019    2383 log.go:172] (0x40008f4630) Data frame received for 5\nI0819 01:05:31.460093    2383 log.go:172] (0x40009a4000) (5) Data frame handling\nI0819 01:05:31.460214    2383 log.go:172] (0x4000680780) (3) Data frame sent\nI0819 01:05:31.460320    2383 log.go:172] (0x40008f4630) Data frame received for 3\nI0819 01:05:31.460435    2383 log.go:172] (0x40006806e0) (1) Data frame sent\nI0819 01:05:31.460652    2383 log.go:172] (0x4000680780) (3) Data frame handling\nI0819 01:05:31.461679    2383 log.go:172] (0x40008f4630) (0x40006806e0) Stream removed, broadcasting: 1\nI0819 01:05:31.464116    2383 log.go:172] (0x40008f4630) Go away received\nI0819 01:05:31.466387    2383 log.go:172] (0x40008f4630) (0x40006806e0) Stream removed, broadcasting: 1\nI0819 01:05:31.466823    2383 log.go:172] (0x40008f4630) (0x4000680780) Stream removed, broadcasting: 3\nI0819 01:05:31.467260    2383 log.go:172] (0x40008f4630) (0x40009a4000) Stream removed, broadcasting: 5\n"
Aug 19 01:05:31.478: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 19 01:05:31.479: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 19 01:05:41.642: INFO: Waiting for StatefulSet statefulset-8926/ss2 to complete update
Aug 19 01:05:41.643: INFO: Waiting for Pod statefulset-8926/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 19 01:05:41.643: INFO: Waiting for Pod statefulset-8926/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 19 01:05:41.643: INFO: Waiting for Pod statefulset-8926/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 19 01:05:51.851: INFO: Waiting for StatefulSet statefulset-8926/ss2 to complete update
Aug 19 01:05:51.851: INFO: Waiting for Pod statefulset-8926/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 19 01:06:01.656: INFO: Waiting for StatefulSet statefulset-8926/ss2 to complete update
Aug 19 01:06:01.656: INFO: Waiting for Pod statefulset-8926/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Aug 19 01:06:11.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8926 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 19 01:06:13.512: INFO: stderr: "I0819 01:06:13.056166    2406 log.go:172] (0x400056a000) (0x40008d41e0) Create stream\nI0819 01:06:13.058757    2406 log.go:172] (0x400056a000) (0x40008d41e0) Stream added, broadcasting: 1\nI0819 01:06:13.069121    2406 log.go:172] (0x400056a000) Reply frame received for 1\nI0819 01:06:13.069876    2406 log.go:172] (0x400056a000) (0x40008e2000) Create stream\nI0819 01:06:13.069979    2406 log.go:172] (0x400056a000) (0x40008e2000) Stream added, broadcasting: 3\nI0819 01:06:13.071984    2406 log.go:172] (0x400056a000) Reply frame received for 3\nI0819 01:06:13.072632    2406 log.go:172] (0x400056a000) (0x4000545ae0) Create stream\nI0819 01:06:13.072872    2406 log.go:172] (0x400056a000) (0x4000545ae0) Stream added, broadcasting: 5\nI0819 01:06:13.074412    2406 log.go:172] (0x400056a000) Reply frame received for 5\nI0819 01:06:13.128422    2406 log.go:172] (0x400056a000) Data frame received for 5\nI0819 01:06:13.128642    2406 log.go:172] (0x4000545ae0) (5) Data frame handling\nI0819 01:06:13.129255    2406 log.go:172] (0x4000545ae0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0819 01:06:13.492436    2406 log.go:172] (0x400056a000) Data frame received for 3\nI0819 01:06:13.492683    2406 log.go:172] (0x40008e2000) (3) Data frame handling\nI0819 01:06:13.492958    2406 log.go:172] (0x400056a000) Data frame received for 5\nI0819 01:06:13.493177    2406 log.go:172] (0x4000545ae0) (5) Data frame handling\nI0819 01:06:13.493433    2406 log.go:172] (0x40008e2000) (3) Data frame sent\nI0819 01:06:13.493610    2406 log.go:172] (0x400056a000) Data frame received for 3\nI0819 01:06:13.493763    2406 log.go:172] (0x40008e2000) (3) Data frame handling\nI0819 01:06:13.494981    2406 log.go:172] (0x400056a000) Data frame received for 1\nI0819 01:06:13.495125    2406 log.go:172] (0x40008d41e0) (1) Data frame handling\nI0819 01:06:13.495250    2406 log.go:172] (0x40008d41e0) (1) Data frame sent\nI0819 01:06:13.496627    2406 log.go:172] (0x400056a000) (0x40008d41e0) Stream removed, broadcasting: 1\nI0819 01:06:13.500132    2406 log.go:172] (0x400056a000) Go away received\nI0819 01:06:13.503063    2406 log.go:172] (0x400056a000) (0x40008d41e0) Stream removed, broadcasting: 1\nI0819 01:06:13.503434    2406 log.go:172] (0x400056a000) (0x40008e2000) Stream removed, broadcasting: 3\nI0819 01:06:13.503719    2406 log.go:172] (0x400056a000) (0x4000545ae0) Stream removed, broadcasting: 5\n"
Aug 19 01:06:13.514: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 19 01:06:13.514: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 19 01:06:23.586: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 19 01:06:33.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8926 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 19 01:06:35.382: INFO: stderr: "I0819 01:06:35.287281    2429 log.go:172] (0x4000816210) (0x40009901e0) Create stream\nI0819 01:06:35.293525    2429 log.go:172] (0x4000816210) (0x40009901e0) Stream added, broadcasting: 1\nI0819 01:06:35.308974    2429 log.go:172] (0x4000816210) Reply frame received for 1\nI0819 01:06:35.310028    2429 log.go:172] (0x4000816210) (0x4000990280) Create stream\nI0819 01:06:35.310125    2429 log.go:172] (0x4000816210) (0x4000990280) Stream added, broadcasting: 3\nI0819 01:06:35.312060    2429 log.go:172] (0x4000816210) Reply frame received for 3\nI0819 01:06:35.312387    2429 log.go:172] (0x4000816210) (0x4000990320) Create stream\nI0819 01:06:35.312470    2429 log.go:172] (0x4000816210) (0x4000990320) Stream added, broadcasting: 5\nI0819 01:06:35.314063    2429 log.go:172] (0x4000816210) Reply frame received for 5\nI0819 01:06:35.367554    2429 log.go:172] (0x4000816210) Data frame received for 3\nI0819 01:06:35.367755    2429 log.go:172] (0x4000816210) Data frame received for 5\nI0819 01:06:35.367912    2429 log.go:172] (0x4000816210) Data frame received for 1\nI0819 01:06:35.368169    2429 log.go:172] (0x4000990280) (3) Data frame handling\nI0819 01:06:35.368318    2429 log.go:172] (0x40009901e0) (1) Data frame handling\nI0819 01:06:35.368421    2429 log.go:172] (0x4000990320) (5) Data frame handling\nI0819 01:06:35.369509    2429 log.go:172] (0x40009901e0) (1) Data frame sent\nI0819 01:06:35.369591    2429 log.go:172] (0x4000990320) (5) Data frame sent\nI0819 01:06:35.369842    2429 log.go:172] (0x4000990280) (3) Data frame sent\nI0819 01:06:35.370005    2429 log.go:172] (0x4000816210) Data frame received for 3\nI0819 01:06:35.370093    2429 log.go:172] (0x4000990280) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0819 01:06:35.370248    2429 log.go:172] (0x4000816210) Data frame received for 5\nI0819 01:06:35.370424    2429 log.go:172] (0x4000990320) (5) Data frame handling\nI0819 01:06:35.373099    2429 log.go:172] (0x4000816210) (0x40009901e0) Stream removed, broadcasting: 1\nI0819 01:06:35.373802    2429 log.go:172] (0x4000816210) Go away received\nI0819 01:06:35.376266    2429 log.go:172] (0x4000816210) (0x40009901e0) Stream removed, broadcasting: 1\nI0819 01:06:35.376639    2429 log.go:172] (0x4000816210) (0x4000990280) Stream removed, broadcasting: 3\nI0819 01:06:35.376856    2429 log.go:172] (0x4000816210) (0x4000990320) Stream removed, broadcasting: 5\n"
Aug 19 01:06:35.384: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 19 01:06:35.384: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 19 01:07:05.419: INFO: Waiting for StatefulSet statefulset-8926/ss2 to complete update
Aug 19 01:07:05.420: INFO: Waiting for Pod statefulset-8926/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 19 01:07:15.509: INFO: Waiting for StatefulSet statefulset-8926/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 19 01:07:25.436: INFO: Deleting all statefulset in ns statefulset-8926
Aug 19 01:07:25.455: INFO: Scaling statefulset ss2 to 0
Aug 19 01:07:45.480: INFO: Waiting for statefulset status.replicas updated to 0
Aug 19 01:07:45.483: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:07:45.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8926" for this suite.
Aug 19 01:07:53.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:07:53.699: INFO: namespace statefulset-8926 deletion completed in 8.192907336s

• [SLOW TEST:201.238 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
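Editor's note: the StatefulSet test above rolls the update back "in reverse ordinal order". A minimal sketch of that ordering (a hypothetical helper, not the e2e framework's code): the controller walks pods from the highest ordinal down to 0, waiting for each to become Ready before touching the next.

```python
def rolling_update_order(replicas: int) -> list:
    """Sketch: StatefulSet rolling updates proceed from the highest
    ordinal down to 0, one pod at a time."""
    return list(range(replicas - 1, -1, -1))

# For a 3-replica set named ss2, the update order is ss2-2, ss2-1, ss2-0:
print(rolling_update_order(3))  # [2, 1, 0]
```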
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:07:53.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-49q9
STEP: Creating a pod to test atomic-volume-subpath
Aug 19 01:07:53.793: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-49q9" in namespace "subpath-2019" to be "success or failure"
Aug 19 01:07:53.851: INFO: Pod "pod-subpath-test-configmap-49q9": Phase="Pending", Reason="", readiness=false. Elapsed: 57.778052ms
Aug 19 01:07:55.859: INFO: Pod "pod-subpath-test-configmap-49q9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065754826s
Aug 19 01:07:57.866: INFO: Pod "pod-subpath-test-configmap-49q9": Phase="Running", Reason="", readiness=true. Elapsed: 4.073381977s
Aug 19 01:07:59.874: INFO: Pod "pod-subpath-test-configmap-49q9": Phase="Running", Reason="", readiness=true. Elapsed: 6.081266728s
Aug 19 01:08:01.882: INFO: Pod "pod-subpath-test-configmap-49q9": Phase="Running", Reason="", readiness=true. Elapsed: 8.088745228s
Aug 19 01:08:03.888: INFO: Pod "pod-subpath-test-configmap-49q9": Phase="Running", Reason="", readiness=true. Elapsed: 10.09483595s
Aug 19 01:08:05.894: INFO: Pod "pod-subpath-test-configmap-49q9": Phase="Running", Reason="", readiness=true. Elapsed: 12.101596573s
Aug 19 01:08:07.901: INFO: Pod "pod-subpath-test-configmap-49q9": Phase="Running", Reason="", readiness=true. Elapsed: 14.107968725s
Aug 19 01:08:09.907: INFO: Pod "pod-subpath-test-configmap-49q9": Phase="Running", Reason="", readiness=true. Elapsed: 16.114709144s
Aug 19 01:08:11.915: INFO: Pod "pod-subpath-test-configmap-49q9": Phase="Running", Reason="", readiness=true. Elapsed: 18.122442773s
Aug 19 01:08:13.921: INFO: Pod "pod-subpath-test-configmap-49q9": Phase="Running", Reason="", readiness=true. Elapsed: 20.128172673s
Aug 19 01:08:15.927: INFO: Pod "pod-subpath-test-configmap-49q9": Phase="Running", Reason="", readiness=true. Elapsed: 22.134536566s
Aug 19 01:08:17.935: INFO: Pod "pod-subpath-test-configmap-49q9": Phase="Running", Reason="", readiness=true. Elapsed: 24.142456665s
Aug 19 01:08:19.943: INFO: Pod "pod-subpath-test-configmap-49q9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.150109917s
STEP: Saw pod success
Aug 19 01:08:19.943: INFO: Pod "pod-subpath-test-configmap-49q9" satisfied condition "success or failure"
Aug 19 01:08:19.948: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-49q9 container test-container-subpath-configmap-49q9: 
STEP: delete the pod
Aug 19 01:08:19.966: INFO: Waiting for pod pod-subpath-test-configmap-49q9 to disappear
Aug 19 01:08:19.977: INFO: Pod pod-subpath-test-configmap-49q9 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-49q9
Aug 19 01:08:19.977: INFO: Deleting pod "pod-subpath-test-configmap-49q9" in namespace "subpath-2019"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:08:19.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2019" for this suite.
Aug 19 01:08:26.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:08:26.127: INFO: namespace subpath-2019 deletion completed in 6.138552261s

• [SLOW TEST:32.423 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
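Editor's note: the repeated `Phase="Pending"/"Running"/"Succeeded"` lines above come from the framework polling the pod every ~2 seconds for up to 5 minutes until it reports "success or failure". A simplified, stdlib-only sketch of that wait loop (`get_phase` is an assumed caller-supplied callable, not a real client API):

```python
import time

def wait_for_pod_success(get_phase, timeout=300.0, interval=2.0,
                         clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns 'Succeeded' or 'Failed',
    mimicking the e2e framework's 'success or failure' condition."""
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase == "Succeeded":
            return True
        if phase == "Failed":
            return False
        sleep(interval)
    raise TimeoutError("pod did not reach Succeeded/Failed in time")
```

The injectable `clock`/`sleep` parameters are a testing convenience; the real framework uses `wait.PollImmediate` in Go.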
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:08:26.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-0d903906-9528-435e-a47f-3fd400c2e728
STEP: Creating secret with name s-test-opt-upd-a16c3079-f5e7-4aa2-b7ea-6513e7b0e3fa
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0d903906-9528-435e-a47f-3fd400c2e728
STEP: Updating secret s-test-opt-upd-a16c3079-f5e7-4aa2-b7ea-6513e7b0e3fa
STEP: Creating secret with name s-test-opt-create-284b23c1-4238-4eb8-b0d6-65429372ef11
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:08:36.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2834" for this suite.
Aug 19 01:09:00.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:09:00.876: INFO: namespace projected-2834 deletion completed in 24.150756958s

• [SLOW TEST:34.745 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
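Editor's note: the projected-secret test above relies on the kubelet's atomic writer, which is also what the "Atomic writer volumes" tests exercise. The pattern, sketched below under the assumption of POSIX rename semantics (simplified; the real implementation lives in the kubelet's `atomic_writer.go`): write the new payload into a fresh timestamped directory, then atomically swap a `..data` symlink so readers never see a half-written file.

```python
import os
import tempfile

def atomic_update(volume_dir, filename, content):
    """Sketch of the atomic-writer pattern: new payload goes into a fresh
    directory, then a '..data' symlink is replaced with an atomic rename."""
    payload_dir = tempfile.mkdtemp(dir=volume_dir, prefix="..ts_")
    with open(os.path.join(payload_dir, filename), "w") as f:
        f.write(content)
    tmp_link = os.path.join(volume_dir, "..data_tmp")
    os.symlink(os.path.basename(payload_dir), tmp_link)
    os.rename(tmp_link, os.path.join(volume_dir, "..data"))  # atomic swap

vol = tempfile.mkdtemp()
atomic_update(vol, "username", "old")
atomic_update(vol, "username", "new")
with open(os.path.join(vol, "..data", "username")) as f:
    print(f.read())  # new
```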
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:09:00.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0819 01:09:31.048871       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 19 01:09:31.049: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:09:31.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5810" for this suite.
Aug 19 01:09:39.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:09:39.678: INFO: namespace gc-5810 deletion completed in 8.622747507s

• [SLOW TEST:38.798 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
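Editor's note: the garbage-collector test above verifies that `deleteOptions.propagationPolicy: Orphan` leaves the Deployment's ReplicaSet behind. A toy model of the semantics (hypothetical dict-based objects, not client-go): Orphan deletes only the owner and strips its `ownerReferences` from dependents, while Background/Foreground let the GC cascade the delete.

```python
def delete_with_policy(objects, owner, policy):
    """Toy sketch of deleteOptions.propagationPolicy semantics.
    objects: {name: {"ownerReferences": [owner names]}}."""
    objects = {name: {"ownerReferences": list(o["ownerReferences"])}
               for name, o in objects.items()}
    objects.pop(owner)
    for name in list(objects):
        refs = objects[name]["ownerReferences"]
        if owner in refs:
            if policy == "Orphan":
                refs.remove(owner)   # dependent survives, now orphaned
            else:
                objects.pop(name)    # dependent is garbage-collected
    return objects

cluster = {
    "deploy/test": {"ownerReferences": []},
    "rs/test-abc": {"ownerReferences": ["deploy/test"]},
}
print(delete_with_policy(cluster, "deploy/test", "Orphan"))
# {'rs/test-abc': {'ownerReferences': []}}
```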
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:09:39.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 19 01:09:46.457: INFO: Successfully updated pod "annotationupdate6b61c304-e80f-479f-aa67-cfb03a6c2489"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:09:48.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-534" for this suite.
Aug 19 01:10:10.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:10:11.092: INFO: namespace projected-534 deletion completed in 22.147753849s

• [SLOW TEST:31.411 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
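Editor's note: the downward API test above updates pod annotations and waits for the projected file to change. The kubelet serializes `metadata.annotations` into that file as one `key="value"` line per entry, sorted by key; the sketch below simplifies the value quoting (the real code uses Go's `%q` escaping).

```python
def format_annotations(annotations):
    """Simplified sketch of the downward API's annotations file format:
    sorted 'key="value"' lines (real quoting is Go %q, elided here)."""
    return "\n".join('%s="%s"' % (k, v)
                     for k, v in sorted(annotations.items())) + "\n"

print(format_annotations({"builder": "john", "kubernetes.io/change-cause": "update"}), end="")
```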
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:10:11.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Aug 19 01:10:11.469: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:10:12.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1698" for this suite.
Aug 19 01:10:18.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:10:19.038: INFO: namespace kubectl-1698 deletion completed in 6.370837728s

• [SLOW TEST:7.944 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
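Editor's note: `kubectl proxy -p 0`, exercised above, asks the operating system to pick any free port rather than a fixed one. The same mechanism with a plain socket: bind to port 0, then read back the port the OS actually assigned.

```python
import socket

def bind_ephemeral_port():
    """Bind to port 0 so the OS assigns a free ephemeral port,
    then report which port was chosen."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))
    return s, s.getsockname()[1]

sock, port = bind_ephemeral_port()
print(port)  # an OS-chosen ephemeral port
sock.close()
```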
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:10:19.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Aug 19 01:10:19.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8579'
Aug 19 01:10:21.220: INFO: stderr: ""
Aug 19 01:10:21.220: INFO: stdout: "pod/pause created\n"
Aug 19 01:10:21.221: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 19 01:10:21.222: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8579" to be "running and ready"
Aug 19 01:10:21.291: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 69.361379ms
Aug 19 01:10:23.298: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076587367s
Aug 19 01:10:25.305: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083590463s
Aug 19 01:10:27.312: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.090072059s
Aug 19 01:10:27.312: INFO: Pod "pause" satisfied condition "running and ready"
Aug 19 01:10:27.313: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 19 01:10:27.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8579'
Aug 19 01:10:28.854: INFO: stderr: ""
Aug 19 01:10:28.854: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 19 01:10:28.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8579'
Aug 19 01:10:30.154: INFO: stderr: ""
Aug 19 01:10:30.154: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 19 01:10:30.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8579'
Aug 19 01:10:31.382: INFO: stderr: ""
Aug 19 01:10:31.383: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 19 01:10:31.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8579'
Aug 19 01:10:32.692: INFO: stderr: ""
Aug 19 01:10:32.693: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Aug 19 01:10:32.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8579'
Aug 19 01:10:34.146: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 01:10:34.146: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 19 01:10:34.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8579'
Aug 19 01:10:35.490: INFO: stderr: "No resources found.\n"
Aug 19 01:10:35.490: INFO: stdout: ""
Aug 19 01:10:35.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8579 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 19 01:10:36.817: INFO: stderr: ""
Aug 19 01:10:36.817: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:10:36.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8579" for this suite.
Aug 19 01:10:43.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:10:43.153: INFO: namespace kubectl-8579 deletion completed in 6.325731069s

• [SLOW TEST:24.115 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
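Editor's note: the label test above uses both forms of `kubectl label`: `testing-label=testing-label-value` to set a label and `testing-label-` (trailing dash) to remove it. A small sketch of that argument convention applied to a label map (hypothetical helper, mirroring but not reusing kubectl's parser):

```python
def apply_label_args(labels, args):
    """Sketch of kubectl-label argument semantics: 'key=value' sets a
    label; 'key-' (no '=', trailing dash) removes it."""
    labels = dict(labels)
    for arg in args:
        if arg.endswith("-") and "=" not in arg:
            labels.pop(arg[:-1], None)
        else:
            key, _, value = arg.partition("=")
            labels[key] = value
    return labels

print(apply_label_args({}, ["testing-label=testing-label-value"]))
print(apply_label_args({"testing-label": "testing-label-value"}, ["testing-label-"]))
```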
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:10:43.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-a923beeb-4bc2-437b-8d9e-94b59f77b961
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:10:49.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2186" for this suite.
Aug 19 01:11:11.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:11:11.536: INFO: namespace configmap-2186 deletion completed in 22.156701278s

• [SLOW TEST:28.380 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
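Editor's note: the ConfigMap test above checks that binary payloads survive the round trip into a volume. Non-UTF-8 content goes in the ConfigMap's `binaryData` field, which the API stores base64-encoded and the kubelet decodes back to raw bytes when projecting the file. A minimal round-trip sketch:

```python
import base64

# Raw bytes that are not valid UTF-8, as a ConfigMap binaryData entry:
raw = bytes([0xDE, 0xAD, 0xBE, 0xEF])
encoded = base64.b64encode(raw).decode("ascii")  # what the manifest carries
decoded = base64.b64decode(encoded)              # what lands in the volume file
print(encoded, decoded == raw)  # 3q2+7w== True
```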
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:11:11.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 01:11:12.483: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 19 01:11:17.491: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 19 01:11:17.492: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 19 01:11:19.500: INFO: Creating deployment "test-rollover-deployment"
Aug 19 01:11:19.511: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 19 01:11:21.536: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 19 01:11:21.548: INFO: Ensure that both replica sets have 1 created replica
Aug 19 01:11:21.557: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 19 01:11:21.568: INFO: Updating deployment test-rollover-deployment
Aug 19 01:11:21.568: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 19 01:11:23.580: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 19 01:11:23.589: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 19 01:11:23.597: INFO: all replica sets need to contain the pod-template-hash label
Aug 19 01:11:23.598: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396281, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 01:11:25.614: INFO: all replica sets need to contain the pod-template-hash label
Aug 19 01:11:25.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396281, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 01:11:27.615: INFO: all replica sets need to contain the pod-template-hash label
Aug 19 01:11:27.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396285, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 01:11:29.617: INFO: all replica sets need to contain the pod-template-hash label
Aug 19 01:11:29.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396285, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 01:11:31.614: INFO: all replica sets need to contain the pod-template-hash label
Aug 19 01:11:31.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396285, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 01:11:33.629: INFO: all replica sets need to contain the pod-template-hash label
Aug 19 01:11:33.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396285, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 01:11:35.614: INFO: all replica sets need to contain the pod-template-hash label
Aug 19 01:11:35.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396285, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396279, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 01:11:37.614: INFO: 
Aug 19 01:11:37.614: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 19 01:11:37.802: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-675,SelfLink:/apis/apps/v1/namespaces/deployment-675/deployments/test-rollover-deployment,UID:53a62a2a-e573-4a73-b7e8-7a3cb76f0673,ResourceVersion:944489,Generation:2,CreationTimestamp:2020-08-19 01:11:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-19 01:11:19 +0000 UTC 2020-08-19 01:11:19 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-19 01:11:36 +0000 UTC 2020-08-19 01:11:19 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 19 01:11:37.811: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-675,SelfLink:/apis/apps/v1/namespaces/deployment-675/replicasets/test-rollover-deployment-854595fc44,UID:4ac15d5f-f9a7-4338-9960-3385436b2976,ResourceVersion:944478,Generation:2,CreationTimestamp:2020-08-19 01:11:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 53a62a2a-e573-4a73-b7e8-7a3cb76f0673 0x40036546f7 0x40036546f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 19 01:11:37.811: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 19 01:11:37.812: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-675,SelfLink:/apis/apps/v1/namespaces/deployment-675/replicasets/test-rollover-controller,UID:493194b5-857e-4139-933c-35046c276b06,ResourceVersion:944488,Generation:2,CreationTimestamp:2020-08-19 01:11:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 53a62a2a-e573-4a73-b7e8-7a3cb76f0673 0x4003654627 0x4003654628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 19 01:11:37.814: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-675,SelfLink:/apis/apps/v1/namespaces/deployment-675/replicasets/test-rollover-deployment-9b8b997cf,UID:be7fff87-b918-46b0-9621-4f4a882c16fb,ResourceVersion:944442,Generation:2,CreationTimestamp:2020-08-19 01:11:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 53a62a2a-e573-4a73-b7e8-7a3cb76f0673 0x40036547c0 0x40036547c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 19 01:11:37.821: INFO: Pod "test-rollover-deployment-854595fc44-qxkl6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-qxkl6,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-675,SelfLink:/api/v1/namespaces/deployment-675/pods/test-rollover-deployment-854595fc44-qxkl6,UID:1933c7a9-2597-46de-91da-41cb95f233c7,ResourceVersion:944456,Generation:0,CreationTimestamp:2020-08-19 01:11:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 4ac15d5f-f9a7-4338-9960-3385436b2976 0x4003655387 0x4003655388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t4zsg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t4zsg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-t4zsg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003655400} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003655420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:11:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:11:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:11:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:11:21 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.95,StartTime:2020-08-19 01:11:21 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-19 01:11:25 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://627212a8af670a152de438a8395dc4fa214d2af78900a83b703f424e521331fd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:11:37.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-675" for this suite.
Aug 19 01:11:46.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:11:46.203: INFO: namespace deployment-675 deletion completed in 8.374080219s

• [SLOW TEST:34.663 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:11:46.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-6425
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-6425
STEP: Deleting pre-stop pod
Aug 19 01:12:00.006: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:12:00.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6425" for this suite.
Aug 19 01:12:40.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:12:40.211: INFO: namespace prestop-6425 deletion completed in 40.181202405s

• [SLOW TEST:54.007 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:12:40.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 19 01:12:40.953: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 19 01:12:52.223: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:12:52.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4884" for this suite.
Aug 19 01:13:00.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:13:00.390: INFO: namespace pods-4884 deletion completed in 8.151805099s

• [SLOW TEST:20.178 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:13:00.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 01:13:20.846: INFO: Container started at 2020-08-19 01:13:03 +0000 UTC, pod became ready at 2020-08-19 01:13:20 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:13:20.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5193" for this suite.
Aug 19 01:13:42.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:13:43.125: INFO: namespace container-probe-5193 deletion completed in 22.269499464s

• [SLOW TEST:42.732 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:13:43.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 19 01:13:48.795: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:13:48.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1978" for this suite.
Aug 19 01:13:54.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:13:55.049: INFO: namespace container-runtime-1978 deletion completed in 6.121728218s

• [SLOW TEST:11.918 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
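The Container Runtime test above exercises `TerminationMessagePolicy: FallbackToLogsOnError`: when a container fails and its termination-message file is empty, the message is taken from the tail of its log output instead ("DONE" in this run). A toy sketch of that fallback rule, not the kubelet's actual implementation:

```python
def termination_message(policy: str, file_contents: str, log_tail: str) -> str:
    """Select the reported termination message for a failed container:
    the termination-message file wins if non-empty; otherwise, under
    FallbackToLogsOnError, fall back to the tail of the container log."""
    if file_contents:
        return file_contents
    if policy == "FallbackToLogsOnError":
        return log_tail
    return ""

# Mirrors the test: empty message file, 'DONE' written to the log.
assert termination_message("FallbackToLogsOnError", "", "DONE") == "DONE"
# Under the default 'File' policy an empty file yields an empty message.
assert termination_message("File", "", "DONE") == ""
```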
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:13:55.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 19 01:13:55.145: INFO: PodSpec: initContainers in spec.initContainers
Aug 19 01:14:47.652: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-785c5e0a-3ee6-4e5a-b818-59f6ec8a9a4c", GenerateName:"", Namespace:"init-container-8069", SelfLink:"/api/v1/namespaces/init-container-8069/pods/pod-init-785c5e0a-3ee6-4e5a-b818-59f6ec8a9a4c", UID:"1d07908d-cb94-4392-9e1e-dd535a84d6ad", ResourceVersion:"945053", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733396435, loc:(*time.Location)(0x792fa60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"143922441"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2fm5c", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x4003057780), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2fm5c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2fm5c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2fm5c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4002cec198), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0x4002684960), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4002cec220)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4002cec240)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x4002cec248), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x4002cec24c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396435, loc:(*time.Location)(0x792fa60)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396435, loc:(*time.Location)(0x792fa60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396435, loc:(*time.Location)(0x792fa60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733396435, loc:(*time.Location)(0x792fa60)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.5", PodIP:"10.244.2.210", StartTime:(*v1.Time)(0x4002359860), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x40023598a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x40026279d0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://45e0f3cabb5384e374487da66278f707f24d01df58ecb60d38f6e703c554f94a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x40023598c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4002359880), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:14:47.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8069" for this suite.
Aug 19 01:15:09.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:15:09.960: INFO: namespace init-container-8069 deletion completed in 22.157800684s

• [SLOW TEST:74.911 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
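The InitContainer test above checks ordering semantics: init containers run sequentially, and the app container (`run1`) must never start while an init container (`init1`, running `/bin/false`) keeps failing on a RestartAlways pod — the struct dump shows `init1` at RestartCount 3 with `init2` and `run1` still Waiting. A toy simulation of that ordering rule (not the kubelet's scheduler):

```python
def start_order(init_results, app_containers):
    """Return the containers that get to start: init containers run in
    declared order, and app containers start only once every init
    container has succeeded."""
    started = []
    for name, succeeded in init_results:
        started.append(name)
        if not succeeded:
            # A failed init container blocks everything after it.
            return started
    return started + list(app_containers)

# Mirrors the test's pod: init1 fails, so init2 and run1 never start.
assert start_order([("init1", False), ("init2", True)], ["run1"]) == ["init1"]
```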
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:15:09.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug 19 01:15:10.343: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 19 01:15:15.350: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:15:15.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9357" for this suite.
Aug 19 01:15:21.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:15:21.740: INFO: namespace replication-controller-9357 deletion completed in 6.288750866s

• [SLOW TEST:11.774 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:15:21.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-66514bef-c447-4b3b-bed1-e9c02b2ccd82
STEP: Creating a pod to test consume configMaps
Aug 19 01:15:21.830: INFO: Waiting up to 5m0s for pod "pod-configmaps-36c7561b-e489-41c9-89dd-6aab27cc7e7b" in namespace "configmap-5863" to be "success or failure"
Aug 19 01:15:21.882: INFO: Pod "pod-configmaps-36c7561b-e489-41c9-89dd-6aab27cc7e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 51.487658ms
Aug 19 01:15:23.889: INFO: Pod "pod-configmaps-36c7561b-e489-41c9-89dd-6aab27cc7e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058266887s
Aug 19 01:15:25.942: INFO: Pod "pod-configmaps-36c7561b-e489-41c9-89dd-6aab27cc7e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11109119s
Aug 19 01:15:27.948: INFO: Pod "pod-configmaps-36c7561b-e489-41c9-89dd-6aab27cc7e7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117871608s
STEP: Saw pod success
Aug 19 01:15:27.949: INFO: Pod "pod-configmaps-36c7561b-e489-41c9-89dd-6aab27cc7e7b" satisfied condition "success or failure"
Aug 19 01:15:27.957: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-36c7561b-e489-41c9-89dd-6aab27cc7e7b container configmap-volume-test: 
STEP: delete the pod
Aug 19 01:15:27.975: INFO: Waiting for pod pod-configmaps-36c7561b-e489-41c9-89dd-6aab27cc7e7b to disappear
Aug 19 01:15:28.019: INFO: Pod pod-configmaps-36c7561b-e489-41c9-89dd-6aab27cc7e7b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:15:28.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5863" for this suite.
Aug 19 01:15:34.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:15:34.237: INFO: namespace configmap-5863 deletion completed in 6.208921685s

• [SLOW TEST:12.495 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
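The volume tests in this run all lean on the framework's poll-until-phase pattern visible in the log: "Waiting up to 5m0s for pod ... to be 'success or failure'", followed by Elapsed lines until the phase flips from Pending to Succeeded. A generic sketch of that wait loop in Python (the timeout and interval defaults here are illustrative, not the framework's actual values):

```python
import time

def wait_for(condition, timeout=5.0, interval=0.01,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition()` until it returns truthy or `timeout` elapses.
    Returns the final truthiness of the condition."""
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return True
        sleep(interval)
    return bool(condition())  # one last check at the deadline

# Simulate a pod that reports Pending twice, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
assert wait_for(lambda: next(phases) == "Succeeded")
```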
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:15:34.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 19 01:15:34.812: INFO: Waiting up to 5m0s for pod "pod-fba67899-2e21-4440-a922-67c55207227d" in namespace "emptydir-7264" to be "success or failure"
Aug 19 01:15:34.823: INFO: Pod "pod-fba67899-2e21-4440-a922-67c55207227d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.327258ms
Aug 19 01:15:36.894: INFO: Pod "pod-fba67899-2e21-4440-a922-67c55207227d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08185552s
Aug 19 01:15:38.902: INFO: Pod "pod-fba67899-2e21-4440-a922-67c55207227d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089526235s
Aug 19 01:15:40.912: INFO: Pod "pod-fba67899-2e21-4440-a922-67c55207227d": Phase="Running", Reason="", readiness=true. Elapsed: 6.100104429s
Aug 19 01:15:43.083: INFO: Pod "pod-fba67899-2e21-4440-a922-67c55207227d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.271435474s
STEP: Saw pod success
Aug 19 01:15:43.084: INFO: Pod "pod-fba67899-2e21-4440-a922-67c55207227d" satisfied condition "success or failure"
Aug 19 01:15:43.090: INFO: Trying to get logs from node iruya-worker pod pod-fba67899-2e21-4440-a922-67c55207227d container test-container: 
STEP: delete the pod
Aug 19 01:15:43.296: INFO: Waiting for pod pod-fba67899-2e21-4440-a922-67c55207227d to disappear
Aug 19 01:15:43.521: INFO: Pod pod-fba67899-2e21-4440-a922-67c55207227d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:15:43.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7264" for this suite.
Aug 19 01:15:51.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:15:52.029: INFO: namespace emptydir-7264 deletion completed in 8.371778208s

• [SLOW TEST:17.786 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:15:52.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:16:01.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-706" for this suite.
Aug 19 01:16:25.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:16:25.756: INFO: namespace replication-controller-706 deletion completed in 24.281965269s

• [SLOW TEST:33.727 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
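The two ReplicationController tests in this run ("should release no longer matching pods" earlier, and "should adopt matching pods on creation" just above) both reduce to label-selector matching: a controller owns exactly the pods whose labels satisfy its selector, adopting orphans that match and releasing pods whose labels change. A toy sketch of that equality-based matching rule:

```python
def matches(selector: dict, labels: dict) -> bool:
    """An equality-based selector matches a pod when every selector
    key/value pair is present verbatim in the pod's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-adoption"}
orphan = {"name": "pod-adoption"}           # adopted when the RC is created
relabeled = {"name": "pod-adoption-other"}  # released once its label changes
assert matches(selector, orphan)
assert not matches(selector, relabeled)
```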
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:16:25.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 19 01:16:34.596: INFO: Successfully updated pod "labelsupdate5d10caa7-91c8-4fdc-bde0-db8cfe992603"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:16:36.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-457" for this suite.
Aug 19 01:16:59.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:16:59.193: INFO: namespace downward-api-457 deletion completed in 22.507595288s

• [SLOW TEST:33.434 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:16:59.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-9a270373-1018-46ad-a119-5b8cc3989fee
STEP: Creating a pod to test consume secrets
Aug 19 01:16:59.331: INFO: Waiting up to 5m0s for pod "pod-secrets-7979bce5-dfc8-4434-bfe3-9ec76ed53fbd" in namespace "secrets-1660" to be "success or failure"
Aug 19 01:16:59.354: INFO: Pod "pod-secrets-7979bce5-dfc8-4434-bfe3-9ec76ed53fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.417488ms
Aug 19 01:17:01.360: INFO: Pod "pod-secrets-7979bce5-dfc8-4434-bfe3-9ec76ed53fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028703424s
Aug 19 01:17:03.366: INFO: Pod "pod-secrets-7979bce5-dfc8-4434-bfe3-9ec76ed53fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03452912s
Aug 19 01:17:05.371: INFO: Pod "pod-secrets-7979bce5-dfc8-4434-bfe3-9ec76ed53fbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040052188s
STEP: Saw pod success
Aug 19 01:17:05.371: INFO: Pod "pod-secrets-7979bce5-dfc8-4434-bfe3-9ec76ed53fbd" satisfied condition "success or failure"
Aug 19 01:17:05.376: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-7979bce5-dfc8-4434-bfe3-9ec76ed53fbd container secret-env-test: 
STEP: delete the pod
Aug 19 01:17:05.399: INFO: Waiting for pod pod-secrets-7979bce5-dfc8-4434-bfe3-9ec76ed53fbd to disappear
Aug 19 01:17:05.403: INFO: Pod pod-secrets-7979bce5-dfc8-4434-bfe3-9ec76ed53fbd no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:17:05.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1660" for this suite.
Aug 19 01:17:11.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:17:11.594: INFO: namespace secrets-1660 deletion completed in 6.181685816s

• [SLOW TEST:12.398 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:17:11.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-0af9d34f-6f0a-43f9-969b-a284cd9f7d3d
STEP: Creating a pod to test consume secrets
Aug 19 01:17:11.674: INFO: Waiting up to 5m0s for pod "pod-secrets-74e210f9-15ce-4c17-9418-1495e00efa15" in namespace "secrets-9430" to be "success or failure"
Aug 19 01:17:11.691: INFO: Pod "pod-secrets-74e210f9-15ce-4c17-9418-1495e00efa15": Phase="Pending", Reason="", readiness=false. Elapsed: 16.15777ms
Aug 19 01:17:13.696: INFO: Pod "pod-secrets-74e210f9-15ce-4c17-9418-1495e00efa15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021677154s
Aug 19 01:17:15.702: INFO: Pod "pod-secrets-74e210f9-15ce-4c17-9418-1495e00efa15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027095386s
STEP: Saw pod success
Aug 19 01:17:15.702: INFO: Pod "pod-secrets-74e210f9-15ce-4c17-9418-1495e00efa15" satisfied condition "success or failure"
Aug 19 01:17:15.705: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-74e210f9-15ce-4c17-9418-1495e00efa15 container secret-volume-test: 
STEP: delete the pod
Aug 19 01:17:15.734: INFO: Waiting for pod pod-secrets-74e210f9-15ce-4c17-9418-1495e00efa15 to disappear
Aug 19 01:17:15.756: INFO: Pod pod-secrets-74e210f9-15ce-4c17-9418-1495e00efa15 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:17:15.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9430" for this suite.
Aug 19 01:17:21.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:17:21.904: INFO: namespace secrets-9430 deletion completed in 6.137720332s

• [SLOW TEST:10.309 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:17:21.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 19 01:17:26.007: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-d0e1e7f7-1387-4e31-b6e9-cc983b7e934a,GenerateName:,Namespace:events-4474,SelfLink:/api/v1/namespaces/events-4474/pods/send-events-d0e1e7f7-1387-4e31-b6e9-cc983b7e934a,UID:32761d54-e782-4612-9700-69e0e39432b3,ResourceVersion:945582,Generation:0,CreationTimestamp:2020-08-19 01:17:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 968478957,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7slpn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7slpn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-7slpn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4003a2b560} {node.kubernetes.io/unreachable Exists  NoExecute 0x4003a2b580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:17:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:17:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:17:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 01:17:21 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.216,StartTime:2020-08-19 01:17:22 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-08-19 01:17:24 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://bc0c94f5a9ebcdf0115b375d36a18a8e502c9460c2013b4c45198823994d78e3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Aug 19 01:17:28.017: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 19 01:17:30.025: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:17:30.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4474" for this suite.
Aug 19 01:18:16.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:18:16.252: INFO: namespace events-4474 deletion completed in 46.173301929s

• [SLOW TEST:54.345 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
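[Editor's note] The Events spec above polls until it sees one event from the scheduler and one from the kubelet about the pod. A sketch, under the assumption that the poll narrows the Event query with field selectors on `involvedObject` and `source` (the helper below is hypothetical; the pod name and namespace are taken from this run's log):

```python
# Builds field-selector strings a client could use to poll for the two
# events the spec waits on: one sourced from the scheduler, one from the
# kubelet. event_selector is an illustrative helper, not framework code.
def event_selector(pod_name, namespace, source):
    fields = {
        "involvedObject.kind": "Pod",
        "involvedObject.name": pod_name,
        "involvedObject.namespace": namespace,
        "source": source,
    }
    # Field selectors are comma-separated key=value pairs.
    return ",".join("%s=%s" % kv for kv in sorted(fields.items()))

pod = "send-events-d0e1e7f7-1387-4e31-b6e9-cc983b7e934a"
sched_sel = event_selector(pod, "events-4474", "default-scheduler")
kubelet_sel = event_selector(pod, "events-4474", "kubelet")
print(sched_sel)
```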
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:18:16.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-13450416-e390-49cb-9960-3d5d31e594f7
STEP: Creating a pod to test consume secrets
Aug 19 01:18:16.355: INFO: Waiting up to 5m0s for pod "pod-secrets-4771faa4-364c-48e2-a8bc-402271390c2c" in namespace "secrets-8619" to be "success or failure"
Aug 19 01:18:16.363: INFO: Pod "pod-secrets-4771faa4-364c-48e2-a8bc-402271390c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.45801ms
Aug 19 01:18:18.370: INFO: Pod "pod-secrets-4771faa4-364c-48e2-a8bc-402271390c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015114742s
Aug 19 01:18:20.378: INFO: Pod "pod-secrets-4771faa4-364c-48e2-a8bc-402271390c2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023009461s
STEP: Saw pod success
Aug 19 01:18:20.378: INFO: Pod "pod-secrets-4771faa4-364c-48e2-a8bc-402271390c2c" satisfied condition "success or failure"
Aug 19 01:18:20.384: INFO: Trying to get logs from node iruya-worker pod pod-secrets-4771faa4-364c-48e2-a8bc-402271390c2c container secret-volume-test: 
STEP: delete the pod
Aug 19 01:18:20.419: INFO: Waiting for pod pod-secrets-4771faa4-364c-48e2-a8bc-402271390c2c to disappear
Aug 19 01:18:20.441: INFO: Pod pod-secrets-4771faa4-364c-48e2-a8bc-402271390c2c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:18:20.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8619" for this suite.
Aug 19 01:18:26.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:18:26.634: INFO: namespace secrets-8619 deletion completed in 6.181835982s

• [SLOW TEST:10.380 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:18:26.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Aug 19 01:18:26.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3289 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Aug 19 01:18:34.378: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0819 01:18:34.213808    2647 log.go:172] (0x4000151290) (0x400060e780) Create stream\nI0819 01:18:34.216362    2647 log.go:172] (0x4000151290) (0x400060e780) Stream added, broadcasting: 1\nI0819 01:18:34.232041    2647 log.go:172] (0x4000151290) Reply frame received for 1\nI0819 01:18:34.232643    2647 log.go:172] (0x4000151290) (0x4000395900) Create stream\nI0819 01:18:34.232718    2647 log.go:172] (0x4000151290) (0x4000395900) Stream added, broadcasting: 3\nI0819 01:18:34.234161    2647 log.go:172] (0x4000151290) Reply frame received for 3\nI0819 01:18:34.234405    2647 log.go:172] (0x4000151290) (0x400078c0a0) Create stream\nI0819 01:18:34.234470    2647 log.go:172] (0x4000151290) (0x400078c0a0) Stream added, broadcasting: 5\nI0819 01:18:34.235418    2647 log.go:172] (0x4000151290) Reply frame received for 5\nI0819 01:18:34.235664    2647 log.go:172] (0x4000151290) (0x400060e0a0) Create stream\nI0819 01:18:34.235730    2647 log.go:172] (0x4000151290) (0x400060e0a0) Stream added, broadcasting: 7\nI0819 01:18:34.236645    2647 log.go:172] (0x4000151290) Reply frame received for 7\nI0819 01:18:34.239151    2647 log.go:172] (0x4000395900) (3) Writing data frame\nI0819 01:18:34.240311    2647 log.go:172] (0x4000395900) (3) Writing data frame\nI0819 01:18:34.241352    2647 log.go:172] (0x4000151290) Data frame received for 5\nI0819 01:18:34.241523    2647 log.go:172] (0x400078c0a0) (5) Data frame handling\nI0819 01:18:34.241776    2647 log.go:172] (0x400078c0a0) (5) Data frame sent\nI0819 01:18:34.242087    2647 log.go:172] (0x4000151290) Data frame received for 5\nI0819 01:18:34.242145    2647 log.go:172] (0x400078c0a0) (5) Data frame handling\nI0819 01:18:34.242215    2647 log.go:172] (0x400078c0a0) (5) Data frame sent\nI0819 01:18:34.279315    2647 log.go:172] (0x4000151290) Data frame received for 5\nI0819 01:18:34.279542    2647 log.go:172] (0x400078c0a0) (5) Data frame handling\nI0819 01:18:34.279691    2647 log.go:172] (0x4000151290) Data frame received for 7\nI0819 01:18:34.279890    2647 log.go:172] (0x400060e0a0) (7) Data frame handling\nI0819 01:18:34.280124    2647 log.go:172] (0x4000151290) Data frame received for 1\nI0819 01:18:34.280278    2647 log.go:172] (0x400060e780) (1) Data frame handling\nI0819 01:18:34.280435    2647 log.go:172] (0x400060e780) (1) Data frame sent\nI0819 01:18:34.282351    2647 log.go:172] (0x4000151290) (0x400060e780) Stream removed, broadcasting: 1\nI0819 01:18:34.285195    2647 log.go:172] (0x4000151290) (0x4000395900) Stream removed, broadcasting: 3\nI0819 01:18:34.285818    2647 log.go:172] (0x4000151290) Go away received\nI0819 01:18:34.286961    2647 log.go:172] (0x4000151290) (0x400060e780) Stream removed, broadcasting: 1\nI0819 01:18:34.288474    2647 log.go:172] (0x4000151290) (0x4000395900) Stream removed, broadcasting: 3\nI0819 01:18:34.288569    2647 log.go:172] (0x4000151290) (0x400078c0a0) Stream removed, broadcasting: 5\nI0819 01:18:34.289177    2647 log.go:172] (0x4000151290) (0x400060e0a0) Stream removed, broadcasting: 7\n"
Aug 19 01:18:34.379: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:18:36.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3289" for this suite.
Aug 19 01:18:42.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:18:42.576: INFO: namespace kubectl-3289 deletion completed in 6.171864706s

• [SLOW TEST:15.940 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:18:42.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 19 01:18:47.272: INFO: Successfully updated pod "annotationupdate04c75ce2-e0be-473d-bf98-8864bf8b80ba"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:18:49.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5194" for this suite.
Aug 19 01:19:11.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:19:11.470: INFO: namespace downward-api-5194 deletion completed in 22.155492666s

• [SLOW TEST:28.893 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:19:11.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 19 01:19:11.582: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6902,SelfLink:/api/v1/namespaces/watch-6902/configmaps/e2e-watch-test-watch-closed,UID:8206b4f2-bbb6-4b24-b86d-40df8f53075a,ResourceVersion:945891,Generation:0,CreationTimestamp:2020-08-19 01:19:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 19 01:19:11.584: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6902,SelfLink:/api/v1/namespaces/watch-6902/configmaps/e2e-watch-test-watch-closed,UID:8206b4f2-bbb6-4b24-b86d-40df8f53075a,ResourceVersion:945892,Generation:0,CreationTimestamp:2020-08-19 01:19:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 19 01:19:11.605: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6902,SelfLink:/api/v1/namespaces/watch-6902/configmaps/e2e-watch-test-watch-closed,UID:8206b4f2-bbb6-4b24-b86d-40df8f53075a,ResourceVersion:945893,Generation:0,CreationTimestamp:2020-08-19 01:19:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 19 01:19:11.606: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6902,SelfLink:/api/v1/namespaces/watch-6902/configmaps/e2e-watch-test-watch-closed,UID:8206b4f2-bbb6-4b24-b86d-40df8f53075a,ResourceVersion:945894,Generation:0,CreationTimestamp:2020-08-19 01:19:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:19:11.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6902" for this suite.
Aug 19 01:19:17.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:19:17.763: INFO: namespace watch-6902 deletion completed in 6.14454409s

• [SLOW TEST:6.290 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
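[Editor's note] The Watchers spec above checks that a watch reopened at the last observed resourceVersion delivers every event that occurred while it was closed. A pure-Python simulation of that replay contract, using the resourceVersions from this run's log; note the API server treats resourceVersion as opaque, so the numeric comparison here is a simplification for the sketch:

```python
# Simulates the resume contract: reopening a watch at resourceVersion N
# must deliver, in order, every event newer than N. Treats resourceVersion
# as numerically ordered, which holds for this log's values but is not
# guaranteed by the API contract.
def resume_watch(events, last_seen_rv):
    """Return the events a restarted watch should deliver."""
    return [e for e in events if e["resourceVersion"] > last_seen_rv]

# Event stream mirroring the configmap lifecycle in the log above.
events = [
    {"type": "ADDED",    "resourceVersion": 945891},
    {"type": "MODIFIED", "resourceVersion": 945892},  # first watch closed here
    {"type": "MODIFIED", "resourceVersion": 945893},
    {"type": "DELETED",  "resourceVersion": 945894},
]

replayed = resume_watch(events, last_seen_rv=945892)
print([e["type"] for e in replayed])  # the MODIFIED and DELETED seen in the log
```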
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:19:17.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8428.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8428.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8428.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8428.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8428.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8428.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 19 01:19:23.943: INFO: DNS probes using dns-8428/dns-test-1de55e8d-888c-49a6-bb99-74a5722d1520 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:19:23.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8428" for this suite.
Aug 19 01:19:30.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:19:30.423: INFO: namespace dns-8428 deletion completed in 6.424542482s

• [SLOW TEST:12.659 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
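[Editor's note] The probe scripts in the DNS spec above derive a pod's A record with an awk one-liner: the pod IP with dots replaced by dashes, under `<namespace>.pod.<cluster-domain>`. The same derivation in Python, using a pod IP that appears earlier in this log:

```python
# Mirrors the awk pipeline in the probe script: a pod A record is the pod
# IP, dots replaced by dashes, qualified as <ip>.<namespace>.pod.<domain>.
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    return "%s.%s.pod.%s" % (pod_ip.replace(".", "-"), namespace, cluster_domain)

# Pod IP taken from the Events spec output earlier in this log.
print(pod_a_record("10.244.2.216", "dns-8428"))
# -> 10-244-2-216.dns-8428.pod.cluster.local
```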
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:19:30.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 01:19:30.582: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de70dd40-2bbb-433b-b292-431fcec67086" in namespace "downward-api-9069" to be "success or failure"
Aug 19 01:19:30.607: INFO: Pod "downwardapi-volume-de70dd40-2bbb-433b-b292-431fcec67086": Phase="Pending", Reason="", readiness=false. Elapsed: 24.445319ms
Aug 19 01:19:32.613: INFO: Pod "downwardapi-volume-de70dd40-2bbb-433b-b292-431fcec67086": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031245225s
Aug 19 01:19:34.621: INFO: Pod "downwardapi-volume-de70dd40-2bbb-433b-b292-431fcec67086": Phase="Running", Reason="", readiness=true. Elapsed: 4.038633846s
Aug 19 01:19:36.634: INFO: Pod "downwardapi-volume-de70dd40-2bbb-433b-b292-431fcec67086": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051527148s
STEP: Saw pod success
Aug 19 01:19:36.634: INFO: Pod "downwardapi-volume-de70dd40-2bbb-433b-b292-431fcec67086" satisfied condition "success or failure"
Aug 19 01:19:36.641: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-de70dd40-2bbb-433b-b292-431fcec67086 container client-container: 
STEP: delete the pod
Aug 19 01:19:36.658: INFO: Waiting for pod downwardapi-volume-de70dd40-2bbb-433b-b292-431fcec67086 to disappear
Aug 19 01:19:36.687: INFO: Pod downwardapi-volume-de70dd40-2bbb-433b-b292-431fcec67086 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:19:36.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9069" for this suite.
Aug 19 01:19:42.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:19:42.864: INFO: namespace downward-api-9069 deletion completed in 6.168867992s

• [SLOW TEST:12.440 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
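For reference, the "should set mode on item file" spec above creates a pod whose downward API volume projects a single file with an explicit mode. A hand-written equivalent looks roughly like this; the pod name, image, command, and the 0400 mode are illustrative, not taken from the test source:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400                   # per-item file mode, the property this spec asserts
        fieldRef:
          fieldPath: metadata.name
```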
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:19:42.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Aug 19 01:19:42.954: INFO: Waiting up to 5m0s for pod "var-expansion-05a842a4-ff7f-4ae6-91c6-76830545acf8" in namespace "var-expansion-7625" to be "success or failure"
Aug 19 01:19:42.980: INFO: Pod "var-expansion-05a842a4-ff7f-4ae6-91c6-76830545acf8": Phase="Pending", Reason="", readiness=false. Elapsed: 25.686727ms
Aug 19 01:19:44.987: INFO: Pod "var-expansion-05a842a4-ff7f-4ae6-91c6-76830545acf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032968454s
Aug 19 01:19:46.994: INFO: Pod "var-expansion-05a842a4-ff7f-4ae6-91c6-76830545acf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039651668s
STEP: Saw pod success
Aug 19 01:19:46.994: INFO: Pod "var-expansion-05a842a4-ff7f-4ae6-91c6-76830545acf8" satisfied condition "success or failure"
Aug 19 01:19:46.999: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-05a842a4-ff7f-4ae6-91c6-76830545acf8 container dapi-container: 
STEP: delete the pod
Aug 19 01:19:47.018: INFO: Waiting for pod var-expansion-05a842a4-ff7f-4ae6-91c6-76830545acf8 to disappear
Aug 19 01:19:47.046: INFO: Pod var-expansion-05a842a4-ff7f-4ae6-91c6-76830545acf8 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:19:47.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7625" for this suite.
Aug 19 01:19:53.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:19:53.249: INFO: namespace var-expansion-7625 deletion completed in 6.194076699s

• [SLOW TEST:10.384 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
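The env-composition spec above exercises Kubernetes' `$(VAR)` expansion in container env values: a later variable may reference an earlier one in the same list. A minimal hand-written equivalent (pod name, image, and values are illustrative) is:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $COMPOSED"]
    env:
    - name: FOO
      value: foo-value
    - name: COMPOSED
      value: "prefix-$(FOO)-suffix"  # $(FOO) is expanded by the kubelet before container start
```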
S
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:19:53.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 01:19:53.335: INFO: Creating ReplicaSet my-hostname-basic-9aef5cdb-16a1-4453-871e-fa2e7de1da55
Aug 19 01:19:53.357: INFO: Pod name my-hostname-basic-9aef5cdb-16a1-4453-871e-fa2e7de1da55: Found 0 pods out of 1
Aug 19 01:19:58.363: INFO: Pod name my-hostname-basic-9aef5cdb-16a1-4453-871e-fa2e7de1da55: Found 1 pods out of 1
Aug 19 01:19:58.363: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-9aef5cdb-16a1-4453-871e-fa2e7de1da55" is running
Aug 19 01:19:58.368: INFO: Pod "my-hostname-basic-9aef5cdb-16a1-4453-871e-fa2e7de1da55-bxgrw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 01:19:53 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 01:19:56 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 01:19:56 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 01:19:53 +0000 UTC Reason: Message:}])
Aug 19 01:19:58.369: INFO: Trying to dial the pod
Aug 19 01:20:03.385: INFO: Controller my-hostname-basic-9aef5cdb-16a1-4453-871e-fa2e7de1da55: Got expected result from replica 1 [my-hostname-basic-9aef5cdb-16a1-4453-871e-fa2e7de1da55-bxgrw]: "my-hostname-basic-9aef5cdb-16a1-4453-871e-fa2e7de1da55-bxgrw", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:20:03.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-882" for this suite.
Aug 19 01:20:09.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:20:09.541: INFO: namespace replicaset-882 deletion completed in 6.147635242s

• [SLOW TEST:16.291 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
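The ReplicaSet spec above runs one replica of an image that serves its own hostname over HTTP, then dials each replica and checks the response matches the pod name. A hand-rolled equivalent is sketched below; the image and port are assumptions based on the usual serve-hostname e2e test image, not read from this run:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic            # name pattern taken from the log
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed image
        ports:
        - containerPort: 9376        # assumed port
```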
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:20:09.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 19 01:20:09.656: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 19 01:20:09.669: INFO: Waiting for terminating namespaces to be deleted...
Aug 19 01:20:09.673: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Aug 19 01:20:09.684: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 19 01:20:09.684: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 19 01:20:09.684: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 19 01:20:09.684: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 19 01:20:09.684: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Aug 19 01:20:09.695: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 19 01:20:09.695: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 19 01:20:09.695: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 19 01:20:09.695: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Aug 19 01:20:09.809: INFO: Pod kindnet-nkf5n requesting resource cpu=100m on Node iruya-worker
Aug 19 01:20:09.809: INFO: Pod kindnet-xsdzz requesting resource cpu=100m on Node iruya-worker2
Aug 19 01:20:09.809: INFO: Pod kube-proxy-5zw8s requesting resource cpu=0m on Node iruya-worker
Aug 19 01:20:09.809: INFO: Pod kube-proxy-b98qt requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2e654835-6b6c-49d9-bba1-25f1089ef5a0.162c86a4b773c8a2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-483/filler-pod-2e654835-6b6c-49d9-bba1-25f1089ef5a0 to iruya-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2e654835-6b6c-49d9-bba1-25f1089ef5a0.162c86a540690e94], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2e654835-6b6c-49d9-bba1-25f1089ef5a0.162c86a5831e9aa8], Reason = [Created], Message = [Created container filler-pod-2e654835-6b6c-49d9-bba1-25f1089ef5a0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2e654835-6b6c-49d9-bba1-25f1089ef5a0.162c86a592911113], Reason = [Started], Message = [Started container filler-pod-2e654835-6b6c-49d9-bba1-25f1089ef5a0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dbe4a490-f8cd-487e-9587-ca5a215d6238.162c86a4b81a590f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-483/filler-pod-dbe4a490-f8cd-487e-9587-ca5a215d6238 to iruya-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dbe4a490-f8cd-487e-9587-ca5a215d6238.162c86a51a007d10], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dbe4a490-f8cd-487e-9587-ca5a215d6238.162c86a567f2d0fc], Reason = [Created], Message = [Created container filler-pod-dbe4a490-f8cd-487e-9587-ca5a215d6238]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dbe4a490-f8cd-487e-9587-ca5a215d6238.162c86a57a829e90], Reason = [Started], Message = [Started container filler-pod-dbe4a490-f8cd-487e-9587-ca5a215d6238]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162c86a5a8269456], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:20:14.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-483" for this suite.
Aug 19 01:20:21.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:20:21.121: INFO: namespace sched-pred-483 deletion completed in 6.170703624s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:11.579 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
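The predicate test above fills most of each node's allocatable CPU with pause-container "filler" pods, then submits one more pod whose CPU request cannot fit anywhere, expecting exactly the FailedScheduling event recorded in the log ("2 Insufficient cpu"). The unschedulable pod is essentially the following; the request value is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod               # name taken from the event in the log
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "600m"                  # illustrative: more CPU than remains allocatable on any node
```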
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:20:21.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:20:25.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8749" for this suite.
Aug 19 01:20:31.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:20:31.452: INFO: namespace emptydir-wrapper-8749 deletion completed in 6.136889748s

• [SLOW TEST:10.328 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
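The "should not conflict" spec above mounts a Secret-backed volume and a ConfigMap-backed volume in one pod (matching the secret/configmap/pod cleanup steps in the log) and verifies the two wrapped volumes coexist. Sketched with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-wrapper-demo        # hypothetical name
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
    - name: config-vol
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-vol
    secret:
      secretName: wrapped-secret     # hypothetical
  - name: config-vol
    configMap:
      name: wrapped-configmap        # hypothetical
```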
SSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:20:31.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 01:20:31.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:20:35.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-530" for this suite.
Aug 19 01:21:15.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:21:15.911: INFO: namespace pods-530 deletion completed in 40.149542103s

• [SLOW TEST:44.459 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:21:15.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 01:21:16.064: INFO: Create a RollingUpdate DaemonSet
Aug 19 01:21:16.071: INFO: Check that daemon pods launch on every node of the cluster
Aug 19 01:21:16.097: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:21:16.105: INFO: Number of nodes with available pods: 0
Aug 19 01:21:16.105: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 01:21:17.117: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:21:17.125: INFO: Number of nodes with available pods: 0
Aug 19 01:21:17.125: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 01:21:18.158: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:21:18.434: INFO: Number of nodes with available pods: 0
Aug 19 01:21:18.434: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 01:21:19.116: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:21:19.123: INFO: Number of nodes with available pods: 0
Aug 19 01:21:19.123: INFO: Node iruya-worker is running more than one daemon pod
Aug 19 01:21:20.117: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:21:20.124: INFO: Number of nodes with available pods: 1
Aug 19 01:21:20.125: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 19 01:21:21.119: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:21:21.125: INFO: Number of nodes with available pods: 2
Aug 19 01:21:21.125: INFO: Number of running nodes: 2, number of available pods: 2
Aug 19 01:21:21.125: INFO: Update the DaemonSet to trigger a rollout
Aug 19 01:21:21.136: INFO: Updating DaemonSet daemon-set
Aug 19 01:21:24.350: INFO: Roll back the DaemonSet before rollout is complete
Aug 19 01:21:24.359: INFO: Updating DaemonSet daemon-set
Aug 19 01:21:24.360: INFO: Make sure DaemonSet rollback is complete
Aug 19 01:21:24.387: INFO: Wrong image for pod: daemon-set-cwvzk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 19 01:21:24.387: INFO: Pod daemon-set-cwvzk is not available
Aug 19 01:21:24.535: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:21:25.547: INFO: Wrong image for pod: daemon-set-cwvzk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 19 01:21:25.548: INFO: Pod daemon-set-cwvzk is not available
Aug 19 01:21:25.554: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:21:26.544: INFO: Wrong image for pod: daemon-set-cwvzk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 19 01:21:26.544: INFO: Pod daemon-set-cwvzk is not available
Aug 19 01:21:26.555: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:21:27.893: INFO: Wrong image for pod: daemon-set-cwvzk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 19 01:21:27.894: INFO: Pod daemon-set-cwvzk is not available
Aug 19 01:21:27.902: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:21:28.544: INFO: Wrong image for pod: daemon-set-cwvzk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 19 01:21:28.544: INFO: Pod daemon-set-cwvzk is not available
Aug 19 01:21:28.553: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 19 01:21:29.558: INFO: Pod daemon-set-rb8d2 is not available
Aug 19 01:21:29.566: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9611, will wait for the garbage collector to delete the pods
Aug 19 01:21:29.637: INFO: Deleting DaemonSet.extensions daemon-set took: 8.254291ms
Aug 19 01:21:29.938: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.841364ms
Aug 19 01:21:32.543: INFO: Number of nodes with available pods: 0
Aug 19 01:21:32.543: INFO: Number of running nodes: 0, number of available pods: 0
Aug 19 01:21:32.547: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9611/daemonsets","resourceVersion":"946460"},"items":null}

Aug 19 01:21:32.550: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9611/pods","resourceVersion":"946460"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:21:32.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9611" for this suite.
Aug 19 01:21:38.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:21:38.729: INFO: namespace daemonsets-9611 deletion completed in 6.151888224s

• [SLOW TEST:22.817 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
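The rollback spec above creates a RollingUpdate DaemonSet running docker.io/library/nginx:1.14-alpine, updates it to the unpullable image foo:non-existent mid-rollout, then rolls back and checks that already-healthy pods were not restarted. A hand-driven equivalent starts from a DaemonSet like this (labels are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                   # name taken from the log
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

After patching the image to foo:non-existent, the manual rollback step would be `kubectl rollout undo daemonset/daemon-set`.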
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:21:38.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-5v4s
STEP: Creating a pod to test atomic-volume-subpath
Aug 19 01:21:38.842: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5v4s" in namespace "subpath-1773" to be "success or failure"
Aug 19 01:21:38.862: INFO: Pod "pod-subpath-test-downwardapi-5v4s": Phase="Pending", Reason="", readiness=false. Elapsed: 20.124258ms
Aug 19 01:21:40.868: INFO: Pod "pod-subpath-test-downwardapi-5v4s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026288682s
Aug 19 01:21:42.876: INFO: Pod "pod-subpath-test-downwardapi-5v4s": Phase="Running", Reason="", readiness=true. Elapsed: 4.033323367s
Aug 19 01:21:44.911: INFO: Pod "pod-subpath-test-downwardapi-5v4s": Phase="Running", Reason="", readiness=true. Elapsed: 6.069293174s
Aug 19 01:21:46.918: INFO: Pod "pod-subpath-test-downwardapi-5v4s": Phase="Running", Reason="", readiness=true. Elapsed: 8.075900771s
Aug 19 01:21:48.925: INFO: Pod "pod-subpath-test-downwardapi-5v4s": Phase="Running", Reason="", readiness=true. Elapsed: 10.082575562s
Aug 19 01:21:50.931: INFO: Pod "pod-subpath-test-downwardapi-5v4s": Phase="Running", Reason="", readiness=true. Elapsed: 12.08918774s
Aug 19 01:21:52.938: INFO: Pod "pod-subpath-test-downwardapi-5v4s": Phase="Running", Reason="", readiness=true. Elapsed: 14.095531335s
Aug 19 01:21:54.945: INFO: Pod "pod-subpath-test-downwardapi-5v4s": Phase="Running", Reason="", readiness=true. Elapsed: 16.102337807s
Aug 19 01:21:56.951: INFO: Pod "pod-subpath-test-downwardapi-5v4s": Phase="Running", Reason="", readiness=true. Elapsed: 18.10886066s
Aug 19 01:21:58.958: INFO: Pod "pod-subpath-test-downwardapi-5v4s": Phase="Running", Reason="", readiness=true. Elapsed: 20.115553348s
Aug 19 01:22:00.964: INFO: Pod "pod-subpath-test-downwardapi-5v4s": Phase="Running", Reason="", readiness=true. Elapsed: 22.121659603s
Aug 19 01:22:02.984: INFO: Pod "pod-subpath-test-downwardapi-5v4s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.141667646s
STEP: Saw pod success
Aug 19 01:22:02.984: INFO: Pod "pod-subpath-test-downwardapi-5v4s" satisfied condition "success or failure"
Aug 19 01:22:02.989: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-5v4s container test-container-subpath-downwardapi-5v4s: 
STEP: delete the pod
Aug 19 01:22:03.008: INFO: Waiting for pod pod-subpath-test-downwardapi-5v4s to disappear
Aug 19 01:22:03.014: INFO: Pod pod-subpath-test-downwardapi-5v4s no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-5v4s
Aug 19 01:22:03.014: INFO: Deleting pod "pod-subpath-test-downwardapi-5v4s" in namespace "subpath-1773"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:22:03.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1773" for this suite.
Aug 19 01:22:09.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:22:09.161: INFO: namespace subpath-1773 deletion completed in 6.137256069s

• [SLOW TEST:30.431 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
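The subpath spec above mounts a downward API volume into the container via `subPath`, so only the single projected file (not the volume root) appears at the mount point. A sketch of the pod, with the command and file path as illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi # name pattern taken from the log
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test-volume && sleep 20"]   # illustrative
    volumeMounts:
    - name: downward
      mountPath: /test-volume
      subPath: podname               # mounts the one projected file, not the whole volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```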
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:22:09.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 19 01:22:09.860: INFO: Waiting up to 5m0s for pod "pod-49e15447-7055-4085-9ec2-1f27de750273" in namespace "emptydir-656" to be "success or failure"
Aug 19 01:22:09.866: INFO: Pod "pod-49e15447-7055-4085-9ec2-1f27de750273": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036907ms
Aug 19 01:22:11.874: INFO: Pod "pod-49e15447-7055-4085-9ec2-1f27de750273": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013329839s
Aug 19 01:22:14.011: INFO: Pod "pod-49e15447-7055-4085-9ec2-1f27de750273": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150929781s
Aug 19 01:22:16.018: INFO: Pod "pod-49e15447-7055-4085-9ec2-1f27de750273": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.157934619s
STEP: Saw pod success
Aug 19 01:22:16.019: INFO: Pod "pod-49e15447-7055-4085-9ec2-1f27de750273" satisfied condition "success or failure"
Aug 19 01:22:16.024: INFO: Trying to get logs from node iruya-worker pod pod-49e15447-7055-4085-9ec2-1f27de750273 container test-container: 
STEP: delete the pod
Aug 19 01:22:16.077: INFO: Waiting for pod pod-49e15447-7055-4085-9ec2-1f27de750273 to disappear
Aug 19 01:22:16.081: INFO: Pod pod-49e15447-7055-4085-9ec2-1f27de750273 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:22:16.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-656" for this suite.
Aug 19 01:22:22.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:22:22.233: INFO: namespace emptydir-656 deletion completed in 6.143515904s

• [SLOW TEST:13.069 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
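For reference, the (non-root,0666,tmpfs) emptyDir case above boils down to a pod along these lines; the name, image, and command are illustrative stand-ins for the e2e framework's own mounttest container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs          # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # non-root, per the test title
  containers:
  - name: test-container
    image: busybox                   # the suite uses its own mounttest image
    command: ["sh", "-c", "stat -c '%a' /test-volume && echo ok > /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs-backed emptyDir
```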
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:22:22.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0819 01:23:02.458703       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 19 01:23:02.459: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:23:02.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4426" for this suite.
Aug 19 01:23:16.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:23:16.653: INFO: namespace gc-4426 deletion completed in 14.187484635s

• [SLOW TEST:54.416 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
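The orphaning behaviour verified above is what a deletion with the "Orphan" propagation policy requests; with a v1.15-era kubectl that is (RC name illustrative):

```shell
# Delete the RC but leave its pods running; the garbage collector must
# not clean them up. (Newer kubectl spells this --cascade=orphan.)
kubectl delete rc my-rc --cascade=false
```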
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:23:16.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Aug 19 01:23:16.948: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix337586142/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:23:18.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4847" for this suite.
Aug 19 01:23:24.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:23:24.170: INFO: namespace kubectl-4847 deletion completed in 6.160889925s

• [SLOW TEST:7.515 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
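The proxy test above can be reproduced by hand roughly as follows; the socket path is illustrative:

```shell
# Serve the apiserver over a local Unix socket instead of a TCP port,
# then fetch /api/ through it
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
```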
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:23:24.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-qv4m
STEP: Creating a pod to test atomic-volume-subpath
Aug 19 01:23:24.290: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-qv4m" in namespace "subpath-117" to be "success or failure"
Aug 19 01:23:24.298: INFO: Pod "pod-subpath-test-secret-qv4m": Phase="Pending", Reason="", readiness=false. Elapsed: 7.939212ms
Aug 19 01:23:26.334: INFO: Pod "pod-subpath-test-secret-qv4m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043889636s
Aug 19 01:23:28.341: INFO: Pod "pod-subpath-test-secret-qv4m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050655659s
Aug 19 01:23:30.347: INFO: Pod "pod-subpath-test-secret-qv4m": Phase="Running", Reason="", readiness=true. Elapsed: 6.057010227s
Aug 19 01:23:32.354: INFO: Pod "pod-subpath-test-secret-qv4m": Phase="Running", Reason="", readiness=true. Elapsed: 8.063955495s
Aug 19 01:23:34.361: INFO: Pod "pod-subpath-test-secret-qv4m": Phase="Running", Reason="", readiness=true. Elapsed: 10.071113118s
Aug 19 01:23:36.368: INFO: Pod "pod-subpath-test-secret-qv4m": Phase="Running", Reason="", readiness=true. Elapsed: 12.077263989s
Aug 19 01:23:38.382: INFO: Pod "pod-subpath-test-secret-qv4m": Phase="Running", Reason="", readiness=true. Elapsed: 14.091898161s
Aug 19 01:23:40.392: INFO: Pod "pod-subpath-test-secret-qv4m": Phase="Running", Reason="", readiness=true. Elapsed: 16.102078923s
Aug 19 01:23:42.400: INFO: Pod "pod-subpath-test-secret-qv4m": Phase="Running", Reason="", readiness=true. Elapsed: 18.10958542s
Aug 19 01:23:44.407: INFO: Pod "pod-subpath-test-secret-qv4m": Phase="Running", Reason="", readiness=true. Elapsed: 20.116891947s
Aug 19 01:23:46.414: INFO: Pod "pod-subpath-test-secret-qv4m": Phase="Running", Reason="", readiness=true. Elapsed: 22.123476465s
Aug 19 01:23:48.420: INFO: Pod "pod-subpath-test-secret-qv4m": Phase="Running", Reason="", readiness=true. Elapsed: 24.129435134s
Aug 19 01:23:50.426: INFO: Pod "pod-subpath-test-secret-qv4m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.135707025s
STEP: Saw pod success
Aug 19 01:23:50.426: INFO: Pod "pod-subpath-test-secret-qv4m" satisfied condition "success or failure"
Aug 19 01:23:50.431: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-qv4m container test-container-subpath-secret-qv4m: 
STEP: delete the pod
Aug 19 01:23:50.469: INFO: Waiting for pod pod-subpath-test-secret-qv4m to disappear
Aug 19 01:23:50.482: INFO: Pod pod-subpath-test-secret-qv4m no longer exists
STEP: Deleting pod pod-subpath-test-secret-qv4m
Aug 19 01:23:50.483: INFO: Deleting pod "pod-subpath-test-secret-qv4m" in namespace "subpath-117"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:23:50.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-117" for this suite.
Aug 19 01:23:56.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:23:56.723: INFO: namespace subpath-117 deletion completed in 6.228615834s

• [SLOW TEST:32.548 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
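The atomic-writer subpath case above mounts a single key of a secret via subPath, roughly as in this sketch (secret and key names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test-volume/key1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /test-volume/key1
      subPath: key1                  # mount one key, not the whole volume
  volumes:
  - name: secret-vol
    secret:
      secretName: my-secret          # illustrative
```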
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:23:56.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 19 01:23:56.907: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:24:09.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1367" for this suite.
Aug 19 01:24:19.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:24:19.870: INFO: namespace init-container-1367 deletion completed in 10.366879632s

• [SLOW TEST:23.145 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
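The RestartNever init-container case above corresponds to a pod shaped like the following; names and images are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-test                # illustrative name
spec:
  restartPolicy: Never
  initContainers:                    # run sequentially, each to completion,
  - name: init1                      # before the app container starts
    image: busybox
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: run1
    image: busybox
    command: ["true"]
```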
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:24:19.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-bd4d5884-3769-40d3-83b2-f94803af0cb0 in namespace container-probe-6132
Aug 19 01:24:27.233: INFO: Started pod test-webserver-bd4d5884-3769-40d3-83b2-f94803af0cb0 in namespace container-probe-6132
STEP: checking the pod's current state and verifying that restartCount is present
Aug 19 01:24:27.238: INFO: Initial restart count of pod test-webserver-bd4d5884-3769-40d3-83b2-f94803af0cb0 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:28:27.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6132" for this suite.
Aug 19 01:28:33.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:28:33.743: INFO: namespace container-probe-6132 deletion completed in 6.178363286s

• [SLOW TEST:253.871 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
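The probe exercised above keeps succeeding, so restartCount must stay 0; on a pod it looks roughly like this (image, port, and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver               # illustrative name
spec:
  containers:
  - name: test-webserver
    image: nginx                     # any container serving /healthz works
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 3            # kubelet restarts the container only
                                     # after this many consecutive failures
```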
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:28:33.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 01:28:33.805: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 19 01:28:34.913: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:28:34.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8171" for this suite.
Aug 19 01:28:41.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:28:41.541: INFO: namespace replication-controller-8171 deletion completed in 6.55865433s

• [SLOW TEST:7.796 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
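The quota scenario above can be reproduced with a ResourceQuota like the following; the RC then reports a ReplicaFailure condition until it is scaled down within the quota:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"        # an RC requesting 3 replicas exceeds this and
                     # surfaces a ReplicaFailure condition in its status
```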
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:28:41.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Aug 19 01:28:41.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 19 01:28:42.954: INFO: stderr: ""
Aug 19 01:28:42.954: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:28:42.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5917" for this suite.
Aug 19 01:28:48.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:28:49.125: INFO: namespace kubectl-5917 deletion completed in 6.162039095s

• [SLOW TEST:7.583 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
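The check above amounts to the following one-liner; grep -x matches the whole line, so group/versions such as apps/v1 do not count:

```shell
# Succeeds (exit 0) only if the core "v1" group/version is served
kubectl api-versions | grep -x v1
```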
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:28:49.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-8406d9a4-ff7b-47c2-8c4d-7e7587193f7e in namespace container-probe-1079
Aug 19 01:28:53.870: INFO: Started pod busybox-8406d9a4-ff7b-47c2-8c4d-7e7587193f7e in namespace container-probe-1079
STEP: checking the pod's current state and verifying that restartCount is present
Aug 19 01:28:53.874: INFO: Initial restart count of pod busybox-8406d9a4-ff7b-47c2-8c4d-7e7587193f7e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:32:55.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1079" for this suite.
Aug 19 01:33:01.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:33:02.061: INFO: namespace container-probe-1079 deletion completed in 6.212435644s

• [SLOW TEST:252.935 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
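The exec-probe variant above is the classic busybox liveness example; here the file is never removed, so the probe keeps passing and the container is never restarted (a sketch, names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness             # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /tmp/health && sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # exits 0 while the file exists
      initialDelaySeconds: 5
```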
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:33:02.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 01:33:02.153: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/: 
alternatives.log
containers/

------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-119f8dee-5cef-446c-a4c1-70aeae050ba9
STEP: Creating a pod to test consume secrets
Aug 19 01:33:08.546: INFO: Waiting up to 5m0s for pod "pod-secrets-531bfcb6-8293-40e2-8777-f96c7c135306" in namespace "secrets-2443" to be "success or failure"
Aug 19 01:33:08.566: INFO: Pod "pod-secrets-531bfcb6-8293-40e2-8777-f96c7c135306": Phase="Pending", Reason="", readiness=false. Elapsed: 19.709499ms
Aug 19 01:33:10.573: INFO: Pod "pod-secrets-531bfcb6-8293-40e2-8777-f96c7c135306": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02726751s
Aug 19 01:33:12.580: INFO: Pod "pod-secrets-531bfcb6-8293-40e2-8777-f96c7c135306": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034337651s
Aug 19 01:33:14.587: INFO: Pod "pod-secrets-531bfcb6-8293-40e2-8777-f96c7c135306": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041346691s
STEP: Saw pod success
Aug 19 01:33:14.588: INFO: Pod "pod-secrets-531bfcb6-8293-40e2-8777-f96c7c135306" satisfied condition "success or failure"
Aug 19 01:33:14.594: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-531bfcb6-8293-40e2-8777-f96c7c135306 container secret-volume-test: 
STEP: delete the pod
Aug 19 01:33:14.627: INFO: Waiting for pod pod-secrets-531bfcb6-8293-40e2-8777-f96c7c135306 to disappear
Aug 19 01:33:14.640: INFO: Pod pod-secrets-531bfcb6-8293-40e2-8777-f96c7c135306 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:33:14.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2443" for this suite.
Aug 19 01:33:20.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:33:20.856: INFO: namespace secrets-2443 deletion completed in 6.20774677s

• [SLOW TEST:12.419 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
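The defaultMode case above controls the permission bits of the projected secret files, roughly as in this sketch (secret name and mode are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-defaultmode      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test        # illustrative
      defaultMode: 0400              # files are projected with this mode
```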
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:33:20.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 19 01:33:27.538: INFO: 10 pods remaining
Aug 19 01:33:27.539: INFO: 8 pods have nil DeletionTimestamp
Aug 19 01:33:27.539: INFO: 
Aug 19 01:33:28.486: INFO: 0 pods remaining
Aug 19 01:33:28.486: INFO: 0 pods have nil DeletionTimestamp
Aug 19 01:33:28.486: INFO: 
STEP: Gathering metrics
W0819 01:33:30.278831       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 19 01:33:30.279: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:33:30.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1269" for this suite.
Aug 19 01:33:38.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:33:38.489: INFO: namespace gc-1269 deletion completed in 8.202978275s

• [SLOW TEST:17.632 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
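The garbage-collector test above exercises foreground cascading deletion: the ReplicationController gets a deletionTimestamp and a `foregroundDeletion` finalizer, and is only removed once all of its pods are gone — which is why the log shows the pod count draining from 10 to 0 before the rc disappears. The general shape of the DeleteOptions body involved (the exact request the test sends is not shown in this log):

```yaml
# DELETE /api/v1/namespaces/<namespace>/replicationcontrollers/<name>
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Foreground   # owner persists until every dependent pod is deleted
```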
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:33:38.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-048ff1c5-b3e2-4e00-a187-8cd3eb02a16a
STEP: Creating a pod to test consume configMaps
Aug 19 01:33:38.579: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3ab76d75-ad1d-4c1a-93c6-6546904da54c" in namespace "projected-5413" to be "success or failure"
Aug 19 01:33:38.587: INFO: Pod "pod-projected-configmaps-3ab76d75-ad1d-4c1a-93c6-6546904da54c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.791375ms
Aug 19 01:33:40.594: INFO: Pod "pod-projected-configmaps-3ab76d75-ad1d-4c1a-93c6-6546904da54c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01443292s
Aug 19 01:33:42.795: INFO: Pod "pod-projected-configmaps-3ab76d75-ad1d-4c1a-93c6-6546904da54c": Phase="Running", Reason="", readiness=true. Elapsed: 4.215814474s
Aug 19 01:33:45.008: INFO: Pod "pod-projected-configmaps-3ab76d75-ad1d-4c1a-93c6-6546904da54c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.428591882s
STEP: Saw pod success
Aug 19 01:33:45.008: INFO: Pod "pod-projected-configmaps-3ab76d75-ad1d-4c1a-93c6-6546904da54c" satisfied condition "success or failure"
Aug 19 01:33:45.015: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-3ab76d75-ad1d-4c1a-93c6-6546904da54c container projected-configmap-volume-test: 
STEP: delete the pod
Aug 19 01:33:45.248: INFO: Waiting for pod pod-projected-configmaps-3ab76d75-ad1d-4c1a-93c6-6546904da54c to disappear
Aug 19 01:33:45.252: INFO: Pod pod-projected-configmaps-3ab76d75-ad1d-4c1a-93c6-6546904da54c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:33:45.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5413" for this suite.
Aug 19 01:33:51.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:33:51.572: INFO: namespace projected-5413 deletion completed in 6.311905076s

• [SLOW TEST:13.080 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
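The projected-configMap test above consumes a ConfigMap through a `projected` volume, remaps a key to a different path via `items`, and runs the container as a non-root user. A sketch of such a pod (names, UID, and the key/path mapping are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                # non-root, per the [LinuxOnly] non-root variant
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # hypothetical ConfigMap
          items:
          - key: data-2
            path: path/to/data-2   # key remapped to a nested file path
  restartPolicy: Never
```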
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:33:51.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 19 01:33:51.674: INFO: Waiting up to 5m0s for pod "downward-api-4af2dbbb-35f8-435b-966b-c0248b258c66" in namespace "downward-api-6302" to be "success or failure"
Aug 19 01:33:51.689: INFO: Pod "downward-api-4af2dbbb-35f8-435b-966b-c0248b258c66": Phase="Pending", Reason="", readiness=false. Elapsed: 14.922657ms
Aug 19 01:33:53.697: INFO: Pod "downward-api-4af2dbbb-35f8-435b-966b-c0248b258c66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022742821s
Aug 19 01:33:55.726: INFO: Pod "downward-api-4af2dbbb-35f8-435b-966b-c0248b258c66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052218154s
STEP: Saw pod success
Aug 19 01:33:55.727: INFO: Pod "downward-api-4af2dbbb-35f8-435b-966b-c0248b258c66" satisfied condition "success or failure"
Aug 19 01:33:55.731: INFO: Trying to get logs from node iruya-worker pod downward-api-4af2dbbb-35f8-435b-966b-c0248b258c66 container dapi-container: 
STEP: delete the pod
Aug 19 01:33:55.755: INFO: Waiting for pod downward-api-4af2dbbb-35f8-435b-966b-c0248b258c66 to disappear
Aug 19 01:33:55.759: INFO: Pod downward-api-4af2dbbb-35f8-435b-966b-c0248b258c66 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:33:55.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6302" for this suite.
Aug 19 01:34:01.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:34:02.099: INFO: namespace downward-api-6302 deletion completed in 6.333228942s

• [SLOW TEST:10.525 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
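The Downward API test above injects the pod's own UID into the container as an environment variable via a `fieldRef`. A minimal equivalent (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # hypothetical name
spec:
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # pod UID supplied by the downward API
  restartPolicy: Never
```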
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:34:02.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-rzch
STEP: Creating a pod to test atomic-volume-subpath
Aug 19 01:34:02.217: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rzch" in namespace "subpath-8773" to be "success or failure"
Aug 19 01:34:02.294: INFO: Pod "pod-subpath-test-configmap-rzch": Phase="Pending", Reason="", readiness=false. Elapsed: 76.88645ms
Aug 19 01:34:04.318: INFO: Pod "pod-subpath-test-configmap-rzch": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100690887s
Aug 19 01:34:06.696: INFO: Pod "pod-subpath-test-configmap-rzch": Phase="Running", Reason="", readiness=true. Elapsed: 4.478565667s
Aug 19 01:34:08.704: INFO: Pod "pod-subpath-test-configmap-rzch": Phase="Running", Reason="", readiness=true. Elapsed: 6.486460072s
Aug 19 01:34:10.709: INFO: Pod "pod-subpath-test-configmap-rzch": Phase="Running", Reason="", readiness=true. Elapsed: 8.492214075s
Aug 19 01:34:12.717: INFO: Pod "pod-subpath-test-configmap-rzch": Phase="Running", Reason="", readiness=true. Elapsed: 10.499518515s
Aug 19 01:34:14.724: INFO: Pod "pod-subpath-test-configmap-rzch": Phase="Running", Reason="", readiness=true. Elapsed: 12.506610614s
Aug 19 01:34:16.731: INFO: Pod "pod-subpath-test-configmap-rzch": Phase="Running", Reason="", readiness=true. Elapsed: 14.513725985s
Aug 19 01:34:18.738: INFO: Pod "pod-subpath-test-configmap-rzch": Phase="Running", Reason="", readiness=true. Elapsed: 16.521192296s
Aug 19 01:34:20.746: INFO: Pod "pod-subpath-test-configmap-rzch": Phase="Running", Reason="", readiness=true. Elapsed: 18.528413786s
Aug 19 01:34:22.752: INFO: Pod "pod-subpath-test-configmap-rzch": Phase="Running", Reason="", readiness=true. Elapsed: 20.534961368s
Aug 19 01:34:24.758: INFO: Pod "pod-subpath-test-configmap-rzch": Phase="Running", Reason="", readiness=true. Elapsed: 22.540776729s
Aug 19 01:34:26.767: INFO: Pod "pod-subpath-test-configmap-rzch": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.549393695s
STEP: Saw pod success
Aug 19 01:34:26.767: INFO: Pod "pod-subpath-test-configmap-rzch" satisfied condition "success or failure"
Aug 19 01:34:26.779: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-rzch container test-container-subpath-configmap-rzch: 
STEP: delete the pod
Aug 19 01:34:26.833: INFO: Waiting for pod pod-subpath-test-configmap-rzch to disappear
Aug 19 01:34:26.934: INFO: Pod pod-subpath-test-configmap-rzch no longer exists
STEP: Deleting pod pod-subpath-test-configmap-rzch
Aug 19 01:34:26.934: INFO: Deleting pod "pod-subpath-test-configmap-rzch" in namespace "subpath-8773"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:34:26.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8773" for this suite.
Aug 19 01:34:35.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:34:35.177: INFO: namespace subpath-8773 deletion completed in 8.231103248s

• [SLOW TEST:33.078 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
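The atomic-writer subpath test above mounts a single ConfigMap key at a file path using `subPath`, then reads it repeatedly while the pod runs (hence the long sequence of `Phase="Running"` polls). A sketch of the volume wiring (pod and ConfigMap names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap-example   # hypothetical name
spec:
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /test-volume/test-file"]
    volumeMounts:
    - name: config-volume
      mountPath: /test-volume/test-file
      subPath: test-file          # mount one projected file, not the whole volume
  volumes:
  - name: config-volume
    configMap:
      name: my-configmap          # hypothetical ConfigMap containing key test-file
  restartPolicy: Never
```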
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:34:35.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 19 01:34:35.284: INFO: Waiting up to 5m0s for pod "pod-6ff01bee-7559-4731-b019-8cb048d72c97" in namespace "emptydir-8006" to be "success or failure"
Aug 19 01:34:35.294: INFO: Pod "pod-6ff01bee-7559-4731-b019-8cb048d72c97": Phase="Pending", Reason="", readiness=false. Elapsed: 9.770137ms
Aug 19 01:34:37.300: INFO: Pod "pod-6ff01bee-7559-4731-b019-8cb048d72c97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015035387s
Aug 19 01:34:39.450: INFO: Pod "pod-6ff01bee-7559-4731-b019-8cb048d72c97": Phase="Running", Reason="", readiness=true. Elapsed: 4.165826494s
Aug 19 01:34:41.456: INFO: Pod "pod-6ff01bee-7559-4731-b019-8cb048d72c97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.17170482s
STEP: Saw pod success
Aug 19 01:34:41.456: INFO: Pod "pod-6ff01bee-7559-4731-b019-8cb048d72c97" satisfied condition "success or failure"
Aug 19 01:34:41.460: INFO: Trying to get logs from node iruya-worker2 pod pod-6ff01bee-7559-4731-b019-8cb048d72c97 container test-container: 
STEP: delete the pod
Aug 19 01:34:41.535: INFO: Waiting for pod pod-6ff01bee-7559-4731-b019-8cb048d72c97 to disappear
Aug 19 01:34:41.588: INFO: Pod pod-6ff01bee-7559-4731-b019-8cb048d72c97 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:34:41.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8006" for this suite.
Aug 19 01:34:47.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:34:47.856: INFO: namespace emptydir-8006 deletion completed in 6.256770987s

• [SLOW TEST:12.678 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
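The emptyDir test above is the (root, 0666, tmpfs) variant: a memory-backed emptyDir volume, written as root, with the file mode checked as 0666. An approximation of the pod it creates (name, image, and the exact check command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example   # hypothetical name
spec:
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # create a 0666 file on the tmpfs-backed volume and print its mode
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory            # tmpfs-backed, per the tmpfs variant of the test
  restartPolicy: Never
```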
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:34:47.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 01:34:47.944: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52cd5cf7-9603-4b2f-9028-d2907594e20f" in namespace "downward-api-3270" to be "success or failure"
Aug 19 01:34:47.953: INFO: Pod "downwardapi-volume-52cd5cf7-9603-4b2f-9028-d2907594e20f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.82367ms
Aug 19 01:34:49.960: INFO: Pod "downwardapi-volume-52cd5cf7-9603-4b2f-9028-d2907594e20f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015664902s
Aug 19 01:34:51.967: INFO: Pod "downwardapi-volume-52cd5cf7-9603-4b2f-9028-d2907594e20f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022552526s
STEP: Saw pod success
Aug 19 01:34:51.967: INFO: Pod "downwardapi-volume-52cd5cf7-9603-4b2f-9028-d2907594e20f" satisfied condition "success or failure"
Aug 19 01:34:51.972: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-52cd5cf7-9603-4b2f-9028-d2907594e20f container client-container: 
STEP: delete the pod
Aug 19 01:34:52.155: INFO: Waiting for pod downwardapi-volume-52cd5cf7-9603-4b2f-9028-d2907594e20f to disappear
Aug 19 01:34:52.199: INFO: Pod downwardapi-volume-52cd5cf7-9603-4b2f-9028-d2907594e20f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:34:52.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3270" for this suite.
Aug 19 01:34:58.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:34:58.387: INFO: namespace downward-api-3270 deletion completed in 6.177272408s

• [SLOW TEST:10.530 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
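The Downward API volume test above exposes the container's memory limit as a file through a `downwardAPI` volume with a `resourceFieldRef`. A sketch (names and the 64Mi limit are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory   # written into the file (in bytes)
  restartPolicy: Never
```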
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:34:58.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-4a901548-522b-4a4e-b5da-90213f8a6308 in namespace container-probe-4784
Aug 19 01:35:02.546: INFO: Started pod liveness-4a901548-522b-4a4e-b5da-90213f8a6308 in namespace container-probe-4784
STEP: checking the pod's current state and verifying that restartCount is present
Aug 19 01:35:02.551: INFO: Initial restart count of pod liveness-4a901548-522b-4a4e-b5da-90213f8a6308 is 0
Aug 19 01:35:25.218: INFO: Restart count of pod container-probe-4784/liveness-4a901548-522b-4a4e-b5da-90213f8a6308 is now 1 (22.666727918s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:35:25.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4784" for this suite.
Aug 19 01:35:32.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:35:32.377: INFO: namespace container-probe-4784 deletion completed in 6.898935237s

• [SLOW TEST:33.990 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
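The container-probe test above runs a server whose `/healthz` endpoint starts failing after a delay, and verifies the kubelet restarts the container (restart count 0 → 1 after ~22s in the log). The probe configuration looks roughly like this (image, port, and timings are illustrative, not read from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example   # hypothetical name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # illustrative: an image that fails /healthz after a delay
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1   # kubelet restarts the container on the first failed probe
```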
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:35:32.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 19 01:35:32.907: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 19 01:35:32.957: INFO: Waiting for terminating namespaces to be deleted...
Aug 19 01:35:32.962: INFO: Logging pods the kubelet thinks are on node iruya-worker before test
Aug 19 01:35:32.973: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 01:35:32.973: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 19 01:35:32.973: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 01:35:32.973: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 19 01:35:32.973: INFO: 
Logging pods the kubelet thinks is on node iruya-worker2 before test
Aug 19 01:35:33.149: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 01:35:33.149: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 19 01:35:33.149: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded)
Aug 19 01:35:33.149: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c17c454a-5226-4527-925a-c271965f3bfd 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-c17c454a-5226-4527-925a-c271965f3bfd off the node iruya-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c17c454a-5226-4527-925a-c271965f3bfd
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:35:46.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-816" for this suite.
Aug 19 01:35:58.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:35:58.305: INFO: namespace sched-pred-816 deletion completed in 12.259854685s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:25.927 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
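The scheduler-predicates test above applies a random label to a node (value `42`, per the log) and then relaunches the pod with a matching `nodeSelector` so it must land on that node. A sketch of the relaunched pod (pod name and image are illustrative; the label key and value are the ones from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels-example   # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/e2e-c17c454a-5226-4527-925a-c271965f3bfd: "42"   # label applied to iruya-worker above
  containers:
  - name: with-labels
    image: docker.io/library/nginx:1.14-alpine
```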
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:35:58.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 1 pod
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0819 01:35:59.063593       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 19 01:35:59.064: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:35:59.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1971" for this suite.
Aug 19 01:36:07.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:36:07.707: INFO: namespace gc-1971 deletion completed in 8.635977697s

• [SLOW TEST:9.400 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
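The second garbage-collector test above deletes a Deployment without orphaning, so the GC removes the dependent ReplicaSet (and its pod) asynchronously — the "expected 0 ..., got 1 ..." lines are the test polling until collection completes. This corresponds to background cascading deletion; the general shape of the request body (not copied from the log):

```yaml
# DELETE /apis/apps/v1/namespaces/<namespace>/deployments/<name>
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Background   # owner deleted immediately; GC then deletes the ReplicaSet and pods
```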
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:36:07.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Aug 19 01:36:08.054: INFO: Waiting up to 5m0s for pod "var-expansion-cc5aa13e-3956-4dac-a481-2241a7ba45af" in namespace "var-expansion-7371" to be "success or failure"
Aug 19 01:36:08.081: INFO: Pod "var-expansion-cc5aa13e-3956-4dac-a481-2241a7ba45af": Phase="Pending", Reason="", readiness=false. Elapsed: 27.357718ms
Aug 19 01:36:10.087: INFO: Pod "var-expansion-cc5aa13e-3956-4dac-a481-2241a7ba45af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033511504s
Aug 19 01:36:12.095: INFO: Pod "var-expansion-cc5aa13e-3956-4dac-a481-2241a7ba45af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040753682s
Aug 19 01:36:14.102: INFO: Pod "var-expansion-cc5aa13e-3956-4dac-a481-2241a7ba45af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047788214s
STEP: Saw pod success
Aug 19 01:36:14.102: INFO: Pod "var-expansion-cc5aa13e-3956-4dac-a481-2241a7ba45af" satisfied condition "success or failure"
Aug 19 01:36:14.106: INFO: Trying to get logs from node iruya-worker pod var-expansion-cc5aa13e-3956-4dac-a481-2241a7ba45af container dapi-container: 
STEP: delete the pod
Aug 19 01:36:14.267: INFO: Waiting for pod var-expansion-cc5aa13e-3956-4dac-a481-2241a7ba45af to disappear
Aug 19 01:36:14.284: INFO: Pod var-expansion-cc5aa13e-3956-4dac-a481-2241a7ba45af no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:36:14.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7371" for this suite.
Aug 19 01:36:20.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:36:20.450: INFO: namespace var-expansion-7371 deletion completed in 6.15775978s

• [SLOW TEST:12.740 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
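The substitution exercised above is the `$(VAR)` expansion Kubernetes performs on a container's `command`/`args` using the container's own environment. A minimal pod along the same lines (illustrative, not the generated test pod):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: TEST_VAR
      value: "test-value"
    # $(TEST_VAR) is expanded by Kubernetes before the container starts,
    # so the shell sees the literal value, not an unexpanded reference.
    command: ["sh", "-c", "echo $(TEST_VAR)"]
```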
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:36:20.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 19 01:36:43.301: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 19 01:36:43.421: INFO: Pod pod-with-poststart-http-hook still exists
Aug 19 01:36:45.422: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 19 01:36:45.429: INFO: Pod pod-with-poststart-http-hook still exists
Aug 19 01:36:47.422: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 19 01:36:47.512: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:36:47.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2018" for this suite.
Aug 19 01:37:11.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:37:12.020: INFO: namespace container-lifecycle-hook-2018 deletion completed in 24.499309057s

• [SLOW TEST:51.567 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
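The postStart HTTP hook checked above fires an HTTP GET against a handler as soon as the container starts. A sketch of the relevant pod shape (the handler address is a placeholder; the test wires in the IP of the handler pod it created earlier):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # name as seen in the log above
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart
          host: 10.0.0.1     # placeholder; the test targets its handler pod's IP
          port: 8080
```

If the hook request fails, the container is killed and restarted per its restart policy, which is what makes the "check poststart hook" step meaningful.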
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:37:12.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-63d3df1d-a8ed-4318-8a89-53c94b2263cf
STEP: Creating a pod to test consume secrets
Aug 19 01:37:12.633: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b1bad234-8c7e-4755-ac74-0fcf0b8f180b" in namespace "projected-7071" to be "success or failure"
Aug 19 01:37:12.640: INFO: Pod "pod-projected-secrets-b1bad234-8c7e-4755-ac74-0fcf0b8f180b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.670203ms
Aug 19 01:37:14.647: INFO: Pod "pod-projected-secrets-b1bad234-8c7e-4755-ac74-0fcf0b8f180b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013822904s
Aug 19 01:37:16.654: INFO: Pod "pod-projected-secrets-b1bad234-8c7e-4755-ac74-0fcf0b8f180b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020647708s
Aug 19 01:37:19.159: INFO: Pod "pod-projected-secrets-b1bad234-8c7e-4755-ac74-0fcf0b8f180b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.525342246s
Aug 19 01:37:21.166: INFO: Pod "pod-projected-secrets-b1bad234-8c7e-4755-ac74-0fcf0b8f180b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.532141241s
STEP: Saw pod success
Aug 19 01:37:21.166: INFO: Pod "pod-projected-secrets-b1bad234-8c7e-4755-ac74-0fcf0b8f180b" satisfied condition "success or failure"
Aug 19 01:37:21.334: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-b1bad234-8c7e-4755-ac74-0fcf0b8f180b container projected-secret-volume-test: 
STEP: delete the pod
Aug 19 01:37:21.384: INFO: Waiting for pod pod-projected-secrets-b1bad234-8c7e-4755-ac74-0fcf0b8f180b to disappear
Aug 19 01:37:21.824: INFO: Pod pod-projected-secrets-b1bad234-8c7e-4755-ac74-0fcf0b8f180b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:37:21.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7071" for this suite.
Aug 19 01:37:30.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:37:30.434: INFO: namespace projected-7071 deletion completed in 8.516424805s

• [SLOW TEST:18.413 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
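The projected-secret consumption above mounts a Secret through a `projected` volume and reads a key back from the filesystem. A minimal hypothetical equivalent:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    # Reads one key of the secret back out; illustrative key name.
    command: ["cat", "/etc/projected-secret/data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: my-secret      # illustrative secret name
```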
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:37:30.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-0b647a24-7453-46d7-b5c6-544166e3f180
STEP: Creating configMap with name cm-test-opt-upd-9f312f48-4cce-4142-a604-50593fb3df71
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-0b647a24-7453-46d7-b5c6-544166e3f180
STEP: Updating configmap cm-test-opt-upd-9f312f48-4cce-4142-a604-50593fb3df71
STEP: Creating configMap with name cm-test-opt-create-cc415aab-8530-46fd-b2b2-d20a987be2c6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:37:47.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7237" for this suite.
Aug 19 01:38:13.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:38:13.909: INFO: namespace projected-7237 deletion completed in 26.149403086s

• [SLOW TEST:43.473 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
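The "optional updates" behavior above relies on two properties of projected configMap sources: `optional: true` lets the pod start (and keep running) even when the referenced configMap is deleted or does not yet exist, and the kubelet periodically syncs configMap changes into the mounted volume. A hypothetical volume fragment showing the relevant field:

```yaml
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: cm-may-not-exist   # illustrative name
          optional: true           # pod tolerates the configMap being absent
```

The delete/update/create sequence in the log is then observed purely by watching the mounted files change, which is why the spec spends its time in "waiting to observe update in volume".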
SS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:38:13.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7785.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7785.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7785.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7785.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 19 01:38:22.159: INFO: DNS probes using dns-test-d57faffa-f253-4851-b788-2aa58b7ccb57 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7785.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7785.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7785.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7785.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 19 01:38:42.377: INFO: File wheezy_udp@dns-test-service-3.dns-7785.svc.cluster.local from pod  dns-7785/dns-test-f398302d-b7ef-435e-ae77-e9a792b96266 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 19 01:38:42.382: INFO: File jessie_udp@dns-test-service-3.dns-7785.svc.cluster.local from pod  dns-7785/dns-test-f398302d-b7ef-435e-ae77-e9a792b96266 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 19 01:38:42.382: INFO: Lookups using dns-7785/dns-test-f398302d-b7ef-435e-ae77-e9a792b96266 failed for: [wheezy_udp@dns-test-service-3.dns-7785.svc.cluster.local jessie_udp@dns-test-service-3.dns-7785.svc.cluster.local]

Aug 19 01:38:47.390: INFO: File wheezy_udp@dns-test-service-3.dns-7785.svc.cluster.local from pod  dns-7785/dns-test-f398302d-b7ef-435e-ae77-e9a792b96266 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 19 01:38:47.397: INFO: File jessie_udp@dns-test-service-3.dns-7785.svc.cluster.local from pod  dns-7785/dns-test-f398302d-b7ef-435e-ae77-e9a792b96266 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 19 01:38:47.397: INFO: Lookups using dns-7785/dns-test-f398302d-b7ef-435e-ae77-e9a792b96266 failed for: [wheezy_udp@dns-test-service-3.dns-7785.svc.cluster.local jessie_udp@dns-test-service-3.dns-7785.svc.cluster.local]

Aug 19 01:38:52.558: INFO: DNS probes using dns-test-f398302d-b7ef-435e-ae77-e9a792b96266 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7785.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7785.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7785.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7785.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 19 01:39:10.386: INFO: DNS probes using dns-test-d19d59cb-1a3d-4597-bb94-f2c808c5fba7 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:39:10.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7785" for this suite.
Aug 19 01:39:26.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:39:27.053: INFO: namespace dns-7785 deletion completed in 16.356004672s

• [SLOW TEST:73.143 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
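The ExternalName test above works by pointing a Service at an external DNS name and probing the cluster DNS for the resulting CNAME; the transient "contains 'foo.example.com.' instead of 'bar.example.com.'" failures show DNS propagation lag after the patch, which the probe loop tolerates by retrying. A sketch of the Service in its initial state:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3       # name matching the probe commands above
spec:
  type: ExternalName
  externalName: foo.example.com  # later patched to bar.example.com,
                                 # then converted to type: ClusterIP
```

With `type: ExternalName`, cluster DNS answers `dns-test-service-3.<ns>.svc.cluster.local` with a CNAME to `externalName`; after the switch to `ClusterIP`, the same query returns an A record instead, matching the `dig ... A` probe in the final phase.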
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:39:27.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-5c5fccf0-36d2-45a7-9376-e3c8afb9960e
STEP: Creating a pod to test consume configMaps
Aug 19 01:39:27.137: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2cf12206-744c-4304-974c-e2ca77492374" in namespace "projected-2822" to be "success or failure"
Aug 19 01:39:27.222: INFO: Pod "pod-projected-configmaps-2cf12206-744c-4304-974c-e2ca77492374": Phase="Pending", Reason="", readiness=false. Elapsed: 85.161083ms
Aug 19 01:39:29.228: INFO: Pod "pod-projected-configmaps-2cf12206-744c-4304-974c-e2ca77492374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091443312s
Aug 19 01:39:31.454: INFO: Pod "pod-projected-configmaps-2cf12206-744c-4304-974c-e2ca77492374": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317279134s
Aug 19 01:39:33.586: INFO: Pod "pod-projected-configmaps-2cf12206-744c-4304-974c-e2ca77492374": Phase="Running", Reason="", readiness=true. Elapsed: 6.449324814s
Aug 19 01:39:35.593: INFO: Pod "pod-projected-configmaps-2cf12206-744c-4304-974c-e2ca77492374": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.456432314s
STEP: Saw pod success
Aug 19 01:39:35.594: INFO: Pod "pod-projected-configmaps-2cf12206-744c-4304-974c-e2ca77492374" satisfied condition "success or failure"
Aug 19 01:39:35.604: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-2cf12206-744c-4304-974c-e2ca77492374 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 19 01:39:35.623: INFO: Waiting for pod pod-projected-configmaps-2cf12206-744c-4304-974c-e2ca77492374 to disappear
Aug 19 01:39:35.681: INFO: Pod pod-projected-configmaps-2cf12206-744c-4304-974c-e2ca77492374 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:39:35.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2822" for this suite.
Aug 19 01:39:41.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:39:41.812: INFO: namespace projected-2822 deletion completed in 6.121534546s

• [SLOW TEST:14.758 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
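The "multiple volumes in the same pod" case above consumes one projected configMap at more than one mount point. A hypothetical container fragment illustrating the idea (one volume, two mounts; the test itself may instead declare two volumes backed by the same configMap):

```yaml
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
    - name: projected-configmap-volume
      mountPath: /etc/second-mount     # same volume, second mount point
```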
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:39:41.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 19 01:39:41.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5504'
Aug 19 01:39:48.453: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 19 01:39:48.453: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Aug 19 01:39:48.507: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-v4wtj]
Aug 19 01:39:48.507: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-v4wtj" in namespace "kubectl-5504" to be "running and ready"
Aug 19 01:39:48.511: INFO: Pod "e2e-test-nginx-rc-v4wtj": Phase="Pending", Reason="", readiness=false. Elapsed: 3.732086ms
Aug 19 01:39:50.518: INFO: Pod "e2e-test-nginx-rc-v4wtj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011114941s
Aug 19 01:39:52.526: INFO: Pod "e2e-test-nginx-rc-v4wtj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018404254s
Aug 19 01:39:54.533: INFO: Pod "e2e-test-nginx-rc-v4wtj": Phase="Running", Reason="", readiness=true. Elapsed: 6.025459893s
Aug 19 01:39:54.533: INFO: Pod "e2e-test-nginx-rc-v4wtj" satisfied condition "running and ready"
Aug 19 01:39:54.533: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-v4wtj]
Aug 19 01:39:54.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-5504'
Aug 19 01:39:56.257: INFO: stderr: ""
Aug 19 01:39:56.257: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Aug 19 01:39:56.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5504'
Aug 19 01:39:57.529: INFO: stderr: ""
Aug 19 01:39:57.529: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:39:57.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5504" for this suite.
Aug 19 01:40:17.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:40:17.689: INFO: namespace kubectl-5504 deletion completed in 20.152348125s

• [SLOW TEST:35.874 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
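The stderr in the log above notes that `kubectl run --generator=run/v1` is deprecated. A hypothetical manifest equivalent to what that generator produced, for use with `kubectl create` instead:

```yaml
# Roughly what `kubectl run e2e-test-nginx-rc --generator=run/v1` created:
# a ReplicationController whose selector matches the run=<name> label.
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

The empty stdout from `kubectl logs rc/e2e-test-nginx-rc` in the log is expected here: nginx 1.14 logs to files inside the container by default, so an idle server produces no container log output.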
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:40:17.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 01:40:17.762: INFO: Waiting up to 5m0s for pod "downwardapi-volume-965c6c0f-8af9-4512-974c-de0ed828b01b" in namespace "projected-3030" to be "success or failure"
Aug 19 01:40:17.791: INFO: Pod "downwardapi-volume-965c6c0f-8af9-4512-974c-de0ed828b01b": Phase="Pending", Reason="", readiness=false. Elapsed: 28.871467ms
Aug 19 01:40:19.797: INFO: Pod "downwardapi-volume-965c6c0f-8af9-4512-974c-de0ed828b01b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034496221s
Aug 19 01:40:21.862: INFO: Pod "downwardapi-volume-965c6c0f-8af9-4512-974c-de0ed828b01b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099987391s
Aug 19 01:40:23.870: INFO: Pod "downwardapi-volume-965c6c0f-8af9-4512-974c-de0ed828b01b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107655892s
STEP: Saw pod success
Aug 19 01:40:23.870: INFO: Pod "downwardapi-volume-965c6c0f-8af9-4512-974c-de0ed828b01b" satisfied condition "success or failure"
Aug 19 01:40:23.875: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-965c6c0f-8af9-4512-974c-de0ed828b01b container client-container: 
STEP: delete the pod
Aug 19 01:40:23.960: INFO: Waiting for pod downwardapi-volume-965c6c0f-8af9-4512-974c-de0ed828b01b to disappear
Aug 19 01:40:24.049: INFO: Pod downwardapi-volume-965c6c0f-8af9-4512-974c-de0ed828b01b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:40:24.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3030" for this suite.
Aug 19 01:40:32.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:40:32.282: INFO: namespace projected-3030 deletion completed in 8.225953178s

• [SLOW TEST:14.592 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
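The downward API volume test above exposes the pod's own name as a file via a `fieldRef`. A minimal hypothetical pod showing the mechanism:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Prints the pod's own name, supplied by the downward API volume.
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```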
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:40:32.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7031/configmap-test-2db053bf-9e22-4f0d-b124-fa16e29bf60b
STEP: Creating a pod to test consume configMaps
Aug 19 01:40:33.699: INFO: Waiting up to 5m0s for pod "pod-configmaps-68a06094-5fe8-4598-b696-21f8cefb8e13" in namespace "configmap-7031" to be "success or failure"
Aug 19 01:40:34.873: INFO: Pod "pod-configmaps-68a06094-5fe8-4598-b696-21f8cefb8e13": Phase="Pending", Reason="", readiness=false. Elapsed: 1.173706242s
Aug 19 01:40:36.878: INFO: Pod "pod-configmaps-68a06094-5fe8-4598-b696-21f8cefb8e13": Phase="Pending", Reason="", readiness=false. Elapsed: 3.179329649s
Aug 19 01:40:38.899: INFO: Pod "pod-configmaps-68a06094-5fe8-4598-b696-21f8cefb8e13": Phase="Pending", Reason="", readiness=false. Elapsed: 5.200250313s
Aug 19 01:40:41.078: INFO: Pod "pod-configmaps-68a06094-5fe8-4598-b696-21f8cefb8e13": Phase="Pending", Reason="", readiness=false. Elapsed: 7.379507955s
Aug 19 01:40:43.083: INFO: Pod "pod-configmaps-68a06094-5fe8-4598-b696-21f8cefb8e13": Phase="Pending", Reason="", readiness=false. Elapsed: 9.38432201s
Aug 19 01:40:45.180: INFO: Pod "pod-configmaps-68a06094-5fe8-4598-b696-21f8cefb8e13": Phase="Pending", Reason="", readiness=false. Elapsed: 11.480912818s
Aug 19 01:40:47.239: INFO: Pod "pod-configmaps-68a06094-5fe8-4598-b696-21f8cefb8e13": Phase="Running", Reason="", readiness=true. Elapsed: 13.539997131s
Aug 19 01:40:49.244: INFO: Pod "pod-configmaps-68a06094-5fe8-4598-b696-21f8cefb8e13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.545586012s
STEP: Saw pod success
Aug 19 01:40:49.245: INFO: Pod "pod-configmaps-68a06094-5fe8-4598-b696-21f8cefb8e13" satisfied condition "success or failure"
Aug 19 01:40:49.474: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-68a06094-5fe8-4598-b696-21f8cefb8e13 container env-test: 
STEP: delete the pod
Aug 19 01:40:49.561: INFO: Waiting for pod pod-configmaps-68a06094-5fe8-4598-b696-21f8cefb8e13 to disappear
Aug 19 01:40:49.701: INFO: Pod pod-configmaps-68a06094-5fe8-4598-b696-21f8cefb8e13 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:40:49.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7031" for this suite.
Aug 19 01:40:57.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:40:57.895: INFO: namespace configmap-7031 deletion completed in 8.184169739s

• [SLOW TEST:25.611 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
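The spec above consumes a ConfigMap through container environment variables. A minimal sketch of the kind of objects it creates (names and keys here are hypothetical, not taken from the log; the actual e2e fixture differs):

```yaml
# Illustrative only: a ConfigMap exposed to a container via env vars,
# the pattern exercised by "should be consumable via the environment".
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test        # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]   # test asserts the var appears in env output
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```

The pod runs to completion ("success or failure" means phase Succeeded), and the framework then reads the container's logs to verify the variable was injected.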
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:40:57.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:41:08.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7132" for this suite.
Aug 19 01:41:54.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:41:55.086: INFO: namespace kubelet-test-7132 deletion completed in 46.30356374s

• [SLOW TEST:57.190 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:41:55.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 19 01:42:01.889: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-472 pod-service-account-e80c9600-3a90-4aac-9e1e-98b6cf6d5663 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 19 01:42:03.331: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-472 pod-service-account-e80c9600-3a90-4aac-9e1e-98b6cf6d5663 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 19 01:42:04.836: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-472 pod-service-account-e80c9600-3a90-4aac-9e1e-98b6cf6d5663 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:42:06.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-472" for this suite.
Aug 19 01:42:12.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:42:13.067: INFO: namespace svcaccounts-472 deletion completed in 6.159516008s

• [SLOW TEST:17.981 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:42:13.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 01:42:13.251: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b1e1167-86cc-4073-b316-fa92f83c8e6e" in namespace "projected-5284" to be "success or failure"
Aug 19 01:42:13.303: INFO: Pod "downwardapi-volume-4b1e1167-86cc-4073-b316-fa92f83c8e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 51.923304ms
Aug 19 01:42:15.309: INFO: Pod "downwardapi-volume-4b1e1167-86cc-4073-b316-fa92f83c8e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057381479s
Aug 19 01:42:17.315: INFO: Pod "downwardapi-volume-4b1e1167-86cc-4073-b316-fa92f83c8e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063472401s
Aug 19 01:42:19.321: INFO: Pod "downwardapi-volume-4b1e1167-86cc-4073-b316-fa92f83c8e6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069632905s
STEP: Saw pod success
Aug 19 01:42:19.321: INFO: Pod "downwardapi-volume-4b1e1167-86cc-4073-b316-fa92f83c8e6e" satisfied condition "success or failure"
Aug 19 01:42:19.325: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-4b1e1167-86cc-4073-b316-fa92f83c8e6e container client-container: 
STEP: delete the pod
Aug 19 01:42:19.364: INFO: Waiting for pod downwardapi-volume-4b1e1167-86cc-4073-b316-fa92f83c8e6e to disappear
Aug 19 01:42:19.380: INFO: Pod downwardapi-volume-4b1e1167-86cc-4073-b316-fa92f83c8e6e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:42:19.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5284" for this suite.
Aug 19 01:42:25.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:42:25.570: INFO: namespace projected-5284 deletion completed in 6.184457747s

• [SLOW TEST:12.503 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:42:25.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-ff549c40-3035-4365-9ef5-41b24cd68f80
STEP: Creating a pod to test consume secrets
Aug 19 01:42:25.717: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-30073ad6-6a95-4d31-a675-298fff136a59" in namespace "projected-6366" to be "success or failure"
Aug 19 01:42:25.764: INFO: Pod "pod-projected-secrets-30073ad6-6a95-4d31-a675-298fff136a59": Phase="Pending", Reason="", readiness=false. Elapsed: 46.352769ms
Aug 19 01:42:27.838: INFO: Pod "pod-projected-secrets-30073ad6-6a95-4d31-a675-298fff136a59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120318174s
Aug 19 01:42:30.019: INFO: Pod "pod-projected-secrets-30073ad6-6a95-4d31-a675-298fff136a59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.302249779s
Aug 19 01:42:32.026: INFO: Pod "pod-projected-secrets-30073ad6-6a95-4d31-a675-298fff136a59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.308733233s
STEP: Saw pod success
Aug 19 01:42:32.026: INFO: Pod "pod-projected-secrets-30073ad6-6a95-4d31-a675-298fff136a59" satisfied condition "success or failure"
Aug 19 01:42:32.176: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-30073ad6-6a95-4d31-a675-298fff136a59 container projected-secret-volume-test: 
STEP: delete the pod
Aug 19 01:42:32.241: INFO: Waiting for pod pod-projected-secrets-30073ad6-6a95-4d31-a675-298fff136a59 to disappear
Aug 19 01:42:32.360: INFO: Pod pod-projected-secrets-30073ad6-6a95-4d31-a675-298fff136a59 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:42:32.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6366" for this suite.
Aug 19 01:42:38.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:42:38.530: INFO: namespace projected-6366 deletion completed in 6.16119838s

• [SLOW TEST:12.959 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:42:38.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Aug 19 01:42:38.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8372'
Aug 19 01:42:40.737: INFO: stderr: ""
Aug 19 01:42:40.737: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 19 01:42:40.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8372'
Aug 19 01:42:42.326: INFO: stderr: ""
Aug 19 01:42:42.326: INFO: stdout: "update-demo-nautilus-8mkj5 update-demo-nautilus-fqswz "
Aug 19 01:42:42.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mkj5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8372'
Aug 19 01:42:43.962: INFO: stderr: ""
Aug 19 01:42:43.962: INFO: stdout: ""
Aug 19 01:42:43.962: INFO: update-demo-nautilus-8mkj5 is created but not running
Aug 19 01:42:48.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8372'
Aug 19 01:42:50.520: INFO: stderr: ""
Aug 19 01:42:50.520: INFO: stdout: "update-demo-nautilus-8mkj5 update-demo-nautilus-fqswz "
Aug 19 01:42:50.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mkj5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8372'
Aug 19 01:42:51.851: INFO: stderr: ""
Aug 19 01:42:51.852: INFO: stdout: "true"
Aug 19 01:42:51.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mkj5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8372'
Aug 19 01:42:53.182: INFO: stderr: ""
Aug 19 01:42:53.182: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 01:42:53.182: INFO: validating pod update-demo-nautilus-8mkj5
Aug 19 01:42:53.188: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 01:42:53.189: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 19 01:42:53.189: INFO: update-demo-nautilus-8mkj5 is verified up and running
Aug 19 01:42:53.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fqswz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8372'
Aug 19 01:42:54.440: INFO: stderr: ""
Aug 19 01:42:54.440: INFO: stdout: "true"
Aug 19 01:42:54.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fqswz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8372'
Aug 19 01:42:55.797: INFO: stderr: ""
Aug 19 01:42:55.797: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 01:42:55.797: INFO: validating pod update-demo-nautilus-fqswz
Aug 19 01:42:55.802: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 01:42:55.802: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 19 01:42:55.802: INFO: update-demo-nautilus-fqswz is verified up and running
STEP: rolling-update to new replication controller
Aug 19 01:42:55.943: INFO: scanned /root for discovery docs: 
Aug 19 01:42:55.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8372'
Aug 19 01:43:25.884: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 19 01:43:25.884: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 19 01:43:25.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8372'
Aug 19 01:43:27.267: INFO: stderr: ""
Aug 19 01:43:27.267: INFO: stdout: "update-demo-kitten-c5qk7 update-demo-kitten-w6zjz "
Aug 19 01:43:27.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-c5qk7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8372'
Aug 19 01:43:28.647: INFO: stderr: ""
Aug 19 01:43:28.648: INFO: stdout: "true"
Aug 19 01:43:28.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-c5qk7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8372'
Aug 19 01:43:30.075: INFO: stderr: ""
Aug 19 01:43:30.076: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 19 01:43:30.076: INFO: validating pod update-demo-kitten-c5qk7
Aug 19 01:43:30.082: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 19 01:43:30.082: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 19 01:43:30.082: INFO: update-demo-kitten-c5qk7 is verified up and running
Aug 19 01:43:30.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-w6zjz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8372'
Aug 19 01:43:31.373: INFO: stderr: ""
Aug 19 01:43:31.373: INFO: stdout: "true"
Aug 19 01:43:31.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-w6zjz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8372'
Aug 19 01:43:32.624: INFO: stderr: ""
Aug 19 01:43:32.624: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 19 01:43:32.625: INFO: validating pod update-demo-kitten-w6zjz
Aug 19 01:43:32.631: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 19 01:43:32.631: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 19 01:43:32.631: INFO: update-demo-kitten-w6zjz is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:43:32.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8372" for this suite.
Aug 19 01:44:00.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:44:00.792: INFO: namespace kubectl-8372 deletion completed in 28.153143123s

• [SLOW TEST:82.262 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:44:00.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Aug 19 01:44:00.841: INFO: namespace kubectl-2413
Aug 19 01:44:00.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2413'
Aug 19 01:44:02.661: INFO: stderr: ""
Aug 19 01:44:02.661: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 19 01:44:03.791: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 01:44:03.792: INFO: Found 0 / 1
Aug 19 01:44:04.669: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 01:44:04.670: INFO: Found 0 / 1
Aug 19 01:44:05.949: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 01:44:05.949: INFO: Found 0 / 1
Aug 19 01:44:06.995: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 01:44:06.995: INFO: Found 0 / 1
Aug 19 01:44:07.668: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 01:44:07.668: INFO: Found 0 / 1
Aug 19 01:44:08.685: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 01:44:08.685: INFO: Found 0 / 1
Aug 19 01:44:09.667: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 01:44:09.667: INFO: Found 1 / 1
Aug 19 01:44:09.668: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 19 01:44:09.672: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 01:44:09.673: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 19 01:44:09.673: INFO: wait on redis-master startup in kubectl-2413 
Aug 19 01:44:09.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-xx8bc redis-master --namespace=kubectl-2413'
Aug 19 01:44:10.980: INFO: stderr: ""
Aug 19 01:44:10.980: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 19 Aug 01:44:07.626 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 Aug 01:44:08.350 # Server started, Redis version 3.2.12\n1:M 19 Aug 01:44:08.350 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 Aug 01:44:08.350 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Aug 19 01:44:10.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2413'
Aug 19 01:44:12.363: INFO: stderr: ""
Aug 19 01:44:12.363: INFO: stdout: "service/rm2 exposed\n"
Aug 19 01:44:12.392: INFO: Service rm2 in namespace kubectl-2413 found.
STEP: exposing service
Aug 19 01:44:14.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2413'
Aug 19 01:44:15.791: INFO: stderr: ""
Aug 19 01:44:15.792: INFO: stdout: "service/rm3 exposed\n"
Aug 19 01:44:15.809: INFO: Service rm3 in namespace kubectl-2413 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:44:17.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2413" for this suite.
Aug 19 01:44:39.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:44:39.990: INFO: namespace kubectl-2413 deletion completed in 22.162621533s

• [SLOW TEST:39.196 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:44:39.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 19 01:44:40.078: INFO: Waiting up to 5m0s for pod "pod-8de7f50f-9f66-4b7f-805d-2d2ae3b07fac" in namespace "emptydir-1324" to be "success or failure"
Aug 19 01:44:40.083: INFO: Pod "pod-8de7f50f-9f66-4b7f-805d-2d2ae3b07fac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.393247ms
Aug 19 01:44:42.091: INFO: Pod "pod-8de7f50f-9f66-4b7f-805d-2d2ae3b07fac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011856776s
Aug 19 01:44:44.098: INFO: Pod "pod-8de7f50f-9f66-4b7f-805d-2d2ae3b07fac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019419881s
Aug 19 01:44:46.105: INFO: Pod "pod-8de7f50f-9f66-4b7f-805d-2d2ae3b07fac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026115275s
STEP: Saw pod success
Aug 19 01:44:46.105: INFO: Pod "pod-8de7f50f-9f66-4b7f-805d-2d2ae3b07fac" satisfied condition "success or failure"
Aug 19 01:44:46.110: INFO: Trying to get logs from node iruya-worker pod pod-8de7f50f-9f66-4b7f-805d-2d2ae3b07fac container test-container: <nil>
STEP: delete the pod
Aug 19 01:44:46.145: INFO: Waiting for pod pod-8de7f50f-9f66-4b7f-805d-2d2ae3b07fac to disappear
Aug 19 01:44:46.157: INFO: Pod pod-8de7f50f-9f66-4b7f-805d-2d2ae3b07fac no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:44:46.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1324" for this suite.
Aug 19 01:44:52.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:44:52.327: INFO: namespace emptydir-1324 deletion completed in 6.159440268s

• [SLOW TEST:12.336 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
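The (root,0644,default) case above writes a file into an emptyDir volume on the node's default medium, then verifies its mode and content from inside the container. A minimal sketch of the kind of pod spec this test generates — the pod name, image, and args here are illustrative assumptions, not taken from the log:

```yaml
# Sketch of the pod behind "should support (root,0644,default)".
# Image and args are assumptions modeled on the e2e mounttest pattern.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
    args:
    - --fs_type=/test-volume                 # report filesystem type of the mount
    - --new_file_0644=/test-volume/test-file # create a file with mode 0644
    - --file_perm=/test-volume/test-file     # echo the resulting permissions
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # "default medium" = node disk; medium: Memory would use tmpfs
```

The pod runs to completion (`Phase="Succeeded"` in the log above), and the framework then scrapes the container logs to assert the expected mode string.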
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:44:52.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 19 01:44:52.474: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3953,SelfLink:/api/v1/namespaces/watch-3953/configmaps/e2e-watch-test-resource-version,UID:7b19f246-5276-4b0c-a96d-77ae74d32ecd,ResourceVersion:950676,Generation:0,CreationTimestamp:2020-08-19 01:44:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 19 01:44:52.475: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3953,SelfLink:/api/v1/namespaces/watch-3953/configmaps/e2e-watch-test-resource-version,UID:7b19f246-5276-4b0c-a96d-77ae74d32ecd,ResourceVersion:950678,Generation:0,CreationTimestamp:2020-08-19 01:44:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:44:52.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3953" for this suite.
Aug 19 01:44:58.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:44:58.669: INFO: namespace watch-3953 deletion completed in 6.180730379s

• [SLOW TEST:6.341 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
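The Watchers test above mutates a ConfigMap twice, deletes it, and then opens a watch starting from the resourceVersion returned by the *first* update — so only the second MODIFIED and the DELETED events are replayed, exactly the two `Got :` lines in the log. A sketch of the object involved, with the equivalent imperative flow in comments (the namespace, data key, and the `--output-watch-events` flag on recent kubectl are assumptions):

```yaml
# Sketch of the watch-from-resourceVersion flow. RV1 below stands for the
# resourceVersion returned by the first update; starting a watch there
# replays only events newer than RV1:
#
#   kubectl create configmap e2e-watch-test-resource-version \
#     --from-literal=mutation=0 -n <ns>
#   kubectl get configmaps -n <ns> --watch --output-watch-events \
#     --resource-version=<RV1>
#
# Shape of the ConfigMap as seen in the final events (labels per the log):
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-resource-version
  labels:
    watch-this-configmap: from-resource-version
data:
  mutation: "2"
</imports>
```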
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:44:58.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 19 01:45:06.078: INFO: Successfully updated pod "pod-update-activedeadlineseconds-92936501-357e-40c5-b160-5e6759002a27"
Aug 19 01:45:06.078: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-92936501-357e-40c5-b160-5e6759002a27" in namespace "pods-6459" to be "terminated due to deadline exceeded"
Aug 19 01:45:06.404: INFO: Pod "pod-update-activedeadlineseconds-92936501-357e-40c5-b160-5e6759002a27": Phase="Running", Reason="", readiness=true. Elapsed: 325.016809ms
Aug 19 01:45:08.413: INFO: Pod "pod-update-activedeadlineseconds-92936501-357e-40c5-b160-5e6759002a27": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.334949778s
Aug 19 01:45:08.414: INFO: Pod "pod-update-activedeadlineseconds-92936501-357e-40c5-b160-5e6759002a27" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:45:08.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6459" for this suite.
Aug 19 01:45:14.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:45:14.603: INFO: namespace pods-6459 deletion completed in 6.182084898s

• [SLOW TEST:15.932 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
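The activeDeadlineSeconds test above submits a long-running pod and then shrinks its deadline, after which the kubelet terminates it with `Phase="Failed", Reason="DeadlineExceeded"` — the transition visible in the log. A sketch of the shape of that update; the pod name, image, and deadline values are illustrative assumptions:

```yaml
# Sketch: a pod whose activeDeadlineSeconds is later reduced in place.
# activeDeadlineSeconds is one of the few pod spec fields that may be
# mutated after creation (and only downward), e.g.:
#
#   kubectl patch pod pod-update-activedeadlineseconds \
#     -p '{"spec":{"activeDeadlineSeconds":2}}'
#
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds
spec:
  activeDeadlineSeconds: 30   # initial deadline; patched down to trigger DeadlineExceeded
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```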
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:45:14.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 01:45:14.669: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4530f8d-8ef7-40d5-956a-b9565550de54" in namespace "projected-5518" to be "success or failure"
Aug 19 01:45:14.691: INFO: Pod "downwardapi-volume-e4530f8d-8ef7-40d5-956a-b9565550de54": Phase="Pending", Reason="", readiness=false. Elapsed: 21.507827ms
Aug 19 01:45:16.698: INFO: Pod "downwardapi-volume-e4530f8d-8ef7-40d5-956a-b9565550de54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028553807s
Aug 19 01:45:18.704: INFO: Pod "downwardapi-volume-e4530f8d-8ef7-40d5-956a-b9565550de54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035063591s
STEP: Saw pod success
Aug 19 01:45:18.704: INFO: Pod "downwardapi-volume-e4530f8d-8ef7-40d5-956a-b9565550de54" satisfied condition "success or failure"
Aug 19 01:45:18.710: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e4530f8d-8ef7-40d5-956a-b9565550de54 container client-container: <nil>
STEP: delete the pod
Aug 19 01:45:18.746: INFO: Waiting for pod downwardapi-volume-e4530f8d-8ef7-40d5-956a-b9565550de54 to disappear
Aug 19 01:45:18.760: INFO: Pod downwardapi-volume-e4530f8d-8ef7-40d5-956a-b9565550de54 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:45:18.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5518" for this suite.
Aug 19 01:45:24.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:45:24.940: INFO: namespace projected-5518 deletion completed in 6.170908445s

• [SLOW TEST:10.334 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
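The projected downwardAPI test above mounts the container's own `requests.cpu` as a file and asserts on its contents via the container logs. A minimal sketch of such a volume, assuming illustrative names, image, and request values:

```yaml
# Sketch of a projected downwardAPI volume exposing the cpu request.
# resourceFieldRef + divisor renders requests.cpu as an integer string
# (here "250", since 250m / 1m = 250). Names and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu-request
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
    args: ["--file_content=/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
```

The companion "memory request" test further down follows the same pattern with `resource: requests.memory` and a byte-valued divisor.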
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:45:24.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 19 01:45:31.625: INFO: Successfully updated pod "pod-update-adccb6ec-7618-4a9f-ad61-8530be6e4a84"
STEP: verifying the updated pod is in kubernetes
Aug 19 01:45:31.654: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:45:31.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7954" for this suite.
Aug 19 01:45:55.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:45:55.851: INFO: namespace pods-7954 deletion completed in 24.188110289s

• [SLOW TEST:30.906 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
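The "should be updated" test above GETs the live pod, mutates its metadata, and writes it back ("Pod update OK"). Only a few pod fields are mutable this way; labels are the typical target. An equivalent declarative patch, with the label key and value as illustrative assumptions:

```yaml
# Sketch of the label change the test performs, as a strategic-merge
# patch file. On recent kubectl:
#   kubectl patch pod <pod-name> --patch-file label-patch.yaml
# or imperatively:
#   kubectl label pod <pod-name> --overwrite time=modified
metadata:
  labels:
    time: modified
```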
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:45:55.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 19 01:45:55.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2505'
Aug 19 01:45:57.582: INFO: stderr: ""
Aug 19 01:45:57.582: INFO: stdout: "replicationcontroller/redis-master created\n"
Aug 19 01:45:57.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2505'
Aug 19 01:45:59.254: INFO: stderr: ""
Aug 19 01:45:59.254: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 19 01:46:00.420: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 01:46:00.421: INFO: Found 0 / 1
Aug 19 01:46:01.261: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 01:46:01.261: INFO: Found 0 / 1
Aug 19 01:46:02.262: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 01:46:02.262: INFO: Found 0 / 1
Aug 19 01:46:03.262: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 01:46:03.262: INFO: Found 1 / 1
Aug 19 01:46:03.262: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 19 01:46:03.267: INFO: Selector matched 1 pods for map[app:redis]
Aug 19 01:46:03.268: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 19 01:46:03.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-96f69 --namespace=kubectl-2505'
Aug 19 01:46:04.684: INFO: stderr: ""
Aug 19 01:46:04.684: INFO: stdout: "Name:           redis-master-96f69\nNamespace:      kubectl-2505\nPriority:       0\nNode:           iruya-worker/172.18.0.9\nStart Time:     Wed, 19 Aug 2020 01:45:57 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.244.1.143\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://b536ea1f3ef82d433bbce80895f87a9515e4eacb7d3c059febf3fb23b684c93f\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 19 Aug 2020 01:46:02 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zjpgb (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-zjpgb:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-zjpgb\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  7s    default-scheduler      Successfully assigned kubectl-2505/redis-master-96f69 to iruya-worker\n  Normal  Pulled     6s    kubelet, iruya-worker  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-worker  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-worker  Started container redis-master\n"
Aug 19 01:46:04.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-2505'
Aug 19 01:46:06.201: INFO: stderr: ""
Aug 19 01:46:06.201: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-2505\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: redis-master-96f69\n"
Aug 19 01:46:06.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2505'
Aug 19 01:46:07.548: INFO: stderr: ""
Aug 19 01:46:07.548: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-2505\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.109.115.6\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.1.143:6379\nSession Affinity:  None\nEvents:            <none>\n"
Aug 19 01:46:07.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Aug 19 01:46:10.376: INFO: stderr: ""
Aug 19 01:46:10.376: INFO: stdout: "Name:               iruya-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 15 Aug 2020 09:34:51 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Wed, 19 Aug 2020 01:45:09 +0000   Sat, 15 Aug 2020 09:34:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Wed, 19 Aug 2020 01:45:09 +0000   Sat, 15 Aug 2020 09:34:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Wed, 19 Aug 2020 01:45:09 +0000   Sat, 15 Aug 2020 09:34:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Wed, 19 Aug 2020 01:45:09 +0000   Sat, 15 Aug 2020 09:35:31 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.7\n  Hostname:    iruya-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nSystem Info:\n Machine ID:                 3ed9130db08840259d2231bd97220883\n System UUID:                e52cc602-b019-45cd-b06f-235cc5705532\n Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version:             4.15.0-109-generic\n OS Image:                   Ubuntu 20.04 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version:            v1.15.12\n Kube-Proxy Version:         v1.15.12\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-5d4dd4b4db-6krdd                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     3d16h\n  kube-system                coredns-5d4dd4b4db-htp88                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     3d16h\n  kube-system                etcd-iruya-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d16h\n  kube-system                kindnet-gvnsh                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      3d16h\n  kube-system                kube-apiserver-iruya-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         3d16h\n  kube-system                kube-controller-manager-iruya-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         3d16h\n  kube-system                kube-proxy-ndl9h                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d16h\n  kube-system                kube-scheduler-iruya-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         3d16h\n  local-path-storage         local-path-provisioner-668779bd7-g227z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d16h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              <none>\n"
Aug 19 01:46:10.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2505'
Aug 19 01:46:11.788: INFO: stderr: ""
Aug 19 01:46:11.788: INFO: stdout: "Name:         kubectl-2505\nLabels:       e2e-framework=kubectl\n              e2e-run=ca6273c1-46a0-431c-a61b-38060cf317b2\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:46:11.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2505" for this suite.
Aug 19 01:46:33.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:46:34.018: INFO: namespace kubectl-2505 deletion completed in 22.221147001s

• [SLOW TEST:38.166 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:46:34.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 19 01:46:34.262: INFO: Waiting up to 5m0s for pod "downwardapi-volume-195ab8f7-7fe9-41be-aad5-2ba004da576b" in namespace "projected-9496" to be "success or failure"
Aug 19 01:46:34.295: INFO: Pod "downwardapi-volume-195ab8f7-7fe9-41be-aad5-2ba004da576b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.072976ms
Aug 19 01:46:36.301: INFO: Pod "downwardapi-volume-195ab8f7-7fe9-41be-aad5-2ba004da576b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038861553s
Aug 19 01:46:38.307: INFO: Pod "downwardapi-volume-195ab8f7-7fe9-41be-aad5-2ba004da576b": Phase="Running", Reason="", readiness=true. Elapsed: 4.04491478s
Aug 19 01:46:40.314: INFO: Pod "downwardapi-volume-195ab8f7-7fe9-41be-aad5-2ba004da576b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052088969s
STEP: Saw pod success
Aug 19 01:46:40.314: INFO: Pod "downwardapi-volume-195ab8f7-7fe9-41be-aad5-2ba004da576b" satisfied condition "success or failure"
Aug 19 01:46:40.383: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-195ab8f7-7fe9-41be-aad5-2ba004da576b container client-container: <nil>
STEP: delete the pod
Aug 19 01:46:40.417: INFO: Waiting for pod downwardapi-volume-195ab8f7-7fe9-41be-aad5-2ba004da576b to disappear
Aug 19 01:46:40.432: INFO: Pod downwardapi-volume-195ab8f7-7fe9-41be-aad5-2ba004da576b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:46:40.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9496" for this suite.
Aug 19 01:46:46.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:46:46.618: INFO: namespace projected-9496 deletion completed in 6.176718541s

• [SLOW TEST:12.597 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:46:46.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-e518ca50-a348-42db-971e-cb091ea3ac0c
STEP: Creating a pod to test consume configMaps
Aug 19 01:46:46.750: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0a1daff6-5bf3-4a63-bfba-e3814dd6b0fc" in namespace "projected-9591" to be "success or failure"
Aug 19 01:46:46.768: INFO: Pod "pod-projected-configmaps-0a1daff6-5bf3-4a63-bfba-e3814dd6b0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.054046ms
Aug 19 01:46:48.953: INFO: Pod "pod-projected-configmaps-0a1daff6-5bf3-4a63-bfba-e3814dd6b0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201987285s
Aug 19 01:46:50.959: INFO: Pod "pod-projected-configmaps-0a1daff6-5bf3-4a63-bfba-e3814dd6b0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208475401s
Aug 19 01:46:53.038: INFO: Pod "pod-projected-configmaps-0a1daff6-5bf3-4a63-bfba-e3814dd6b0fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.286963241s
STEP: Saw pod success
Aug 19 01:46:53.038: INFO: Pod "pod-projected-configmaps-0a1daff6-5bf3-4a63-bfba-e3814dd6b0fc" satisfied condition "success or failure"
Aug 19 01:46:53.169: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-0a1daff6-5bf3-4a63-bfba-e3814dd6b0fc container projected-configmap-volume-test: 
STEP: delete the pod
Aug 19 01:46:53.262: INFO: Waiting for pod pod-projected-configmaps-0a1daff6-5bf3-4a63-bfba-e3814dd6b0fc to disappear
Aug 19 01:46:53.455: INFO: Pod pod-projected-configmaps-0a1daff6-5bf3-4a63-bfba-e3814dd6b0fc no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:46:53.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9591" for this suite.
Aug 19 01:46:59.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:46:59.646: INFO: namespace projected-9591 deletion completed in 6.183445729s

• [SLOW TEST:13.026 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 19 01:46:59.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Aug 19 01:46:59.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3126'
Aug 19 01:47:01.549: INFO: stderr: ""
Aug 19 01:47:01.549: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 19 01:47:01.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3126'
Aug 19 01:47:02.866: INFO: stderr: ""
Aug 19 01:47:02.866: INFO: stdout: "update-demo-nautilus-b2hw9 update-demo-nautilus-vbpvj "
Aug 19 01:47:02.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b2hw9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Aug 19 01:47:04.150: INFO: stderr: ""
Aug 19 01:47:04.150: INFO: stdout: ""
Aug 19 01:47:04.150: INFO: update-demo-nautilus-b2hw9 is created but not running
Aug 19 01:47:09.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3126'
Aug 19 01:47:10.461: INFO: stderr: ""
Aug 19 01:47:10.461: INFO: stdout: "update-demo-nautilus-b2hw9 update-demo-nautilus-vbpvj "
Aug 19 01:47:10.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b2hw9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Aug 19 01:47:11.765: INFO: stderr: ""
Aug 19 01:47:11.765: INFO: stdout: "true"
Aug 19 01:47:11.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b2hw9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Aug 19 01:47:13.055: INFO: stderr: ""
Aug 19 01:47:13.055: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 01:47:13.056: INFO: validating pod update-demo-nautilus-b2hw9
Aug 19 01:47:13.061: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 01:47:13.061: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 19 01:47:13.061: INFO: update-demo-nautilus-b2hw9 is verified up and running
Aug 19 01:47:13.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbpvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Aug 19 01:47:14.351: INFO: stderr: ""
Aug 19 01:47:14.352: INFO: stdout: "true"
Aug 19 01:47:14.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbpvj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Aug 19 01:47:15.639: INFO: stderr: ""
Aug 19 01:47:15.639: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 01:47:15.639: INFO: validating pod update-demo-nautilus-vbpvj
Aug 19 01:47:15.645: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 01:47:15.646: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 19 01:47:15.646: INFO: update-demo-nautilus-vbpvj is verified up and running
STEP: scaling down the replication controller
Aug 19 01:47:15.651: INFO: scanned /root for discovery docs: 
Aug 19 01:47:15.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3126'
Aug 19 01:47:18.052: INFO: stderr: ""
Aug 19 01:47:18.052: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 19 01:47:18.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3126'
Aug 19 01:47:19.369: INFO: stderr: ""
Aug 19 01:47:19.370: INFO: stdout: "update-demo-nautilus-b2hw9 update-demo-nautilus-vbpvj "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 19 01:47:24.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3126'
Aug 19 01:47:25.718: INFO: stderr: ""
Aug 19 01:47:25.718: INFO: stdout: "update-demo-nautilus-vbpvj "
Aug 19 01:47:25.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbpvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Aug 19 01:47:26.972: INFO: stderr: ""
Aug 19 01:47:26.972: INFO: stdout: "true"
Aug 19 01:47:26.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbpvj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Aug 19 01:47:28.288: INFO: stderr: ""
Aug 19 01:47:28.288: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 01:47:28.288: INFO: validating pod update-demo-nautilus-vbpvj
Aug 19 01:47:28.294: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 01:47:28.294: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 19 01:47:28.294: INFO: update-demo-nautilus-vbpvj is verified up and running
STEP: scaling up the replication controller
Aug 19 01:47:28.301: INFO: scanned /root for discovery docs: 
Aug 19 01:47:28.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3126'
Aug 19 01:47:30.722: INFO: stderr: ""
Aug 19 01:47:30.722: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 19 01:47:30.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3126'
Aug 19 01:47:32.182: INFO: stderr: ""
Aug 19 01:47:32.182: INFO: stdout: "update-demo-nautilus-27p57 update-demo-nautilus-vbpvj "
Aug 19 01:47:32.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27p57 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Aug 19 01:47:33.459: INFO: stderr: ""
Aug 19 01:47:33.459: INFO: stdout: ""
Aug 19 01:47:33.459: INFO: update-demo-nautilus-27p57 is created but not running
Aug 19 01:47:38.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3126'
Aug 19 01:47:39.780: INFO: stderr: ""
Aug 19 01:47:39.781: INFO: stdout: "update-demo-nautilus-27p57 update-demo-nautilus-vbpvj "
Aug 19 01:47:39.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27p57 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Aug 19 01:47:41.069: INFO: stderr: ""
Aug 19 01:47:41.070: INFO: stdout: "true"
Aug 19 01:47:41.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27p57 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Aug 19 01:47:42.367: INFO: stderr: ""
Aug 19 01:47:42.367: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 01:47:42.367: INFO: validating pod update-demo-nautilus-27p57
Aug 19 01:47:42.373: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 01:47:42.373: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 19 01:47:42.373: INFO: update-demo-nautilus-27p57 is verified up and running
Aug 19 01:47:42.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbpvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Aug 19 01:47:43.668: INFO: stderr: ""
Aug 19 01:47:43.668: INFO: stdout: "true"
Aug 19 01:47:43.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbpvj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Aug 19 01:47:44.977: INFO: stderr: ""
Aug 19 01:47:44.977: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 19 01:47:44.977: INFO: validating pod update-demo-nautilus-vbpvj
Aug 19 01:47:44.983: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 19 01:47:44.983: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 19 01:47:44.983: INFO: update-demo-nautilus-vbpvj is verified up and running
STEP: using delete to clean up resources
Aug 19 01:47:44.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3126'
Aug 19 01:47:46.269: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 19 01:47:46.269: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 19 01:47:46.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3126'
Aug 19 01:47:47.605: INFO: stderr: "No resources found.\n"
Aug 19 01:47:47.606: INFO: stdout: ""
Aug 19 01:47:47.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3126 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 19 01:47:48.911: INFO: stderr: ""
Aug 19 01:47:48.912: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 19 01:47:48.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3126" for this suite.
Aug 19 01:47:54.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 19 01:47:55.114: INFO: namespace kubectl-3126 deletion completed in 6.193053911s

• [SLOW TEST:55.467 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
Aug 19 01:47:55.116: INFO: Running AfterSuite actions on all nodes
Aug 19 01:47:55.117: INFO: Running AfterSuite actions on node 1
Aug 19 01:47:55.117: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 6917.434 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS
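
The `kubectl get pods -o template --template={{range.items}}{{.metadata.name}} {{end}}` invocations that recur throughout the Update Demo test above are plain Go `text/template` evaluations over the API list object. As a minimal sketch (the pod list here is a hand-built mock of the real API shape, reduced to just `.items[].metadata.name`), the same template can be evaluated locally:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderNames evaluates the pod-name template string the e2e test passes to
// kubectl, against any data value exposing .items[].metadata.name.
func renderNames(data interface{}) string {
	tmpl := template.Must(template.New("names").Parse(
		`{{range .items}}{{.metadata.name}} {{end}}`))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// Mocked stand-in for the PodList returned by the API server (assumption:
	// simplified to the only fields the template touches).
	pods := map[string]interface{}{
		"items": []map[string]interface{}{
			{"metadata": map[string]interface{}{"name": "update-demo-nautilus-b2hw9"}},
			{"metadata": map[string]interface{}{"name": "update-demo-nautilus-vbpvj"}},
		},
	}
	fmt.Println(renderNames(pods))
}
```

Note that the running-state checks in the log (`{{if (exists . "status" "containerStatuses")}}…`) additionally rely on `exists`, a helper function kubectl registers into its template engine; that one is not part of the standard `text/template` function set, so this sketch covers only the name-listing template.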